modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---
MilaNLProc/hate-ita | e86867f77ebeae746f68907f76506d8a172e4424 | 2022-07-07T15:32:29.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"it",
"arxiv:2104.12250",
"transformers",
"text classification",
"abusive language",
"hate speech",
"offensive language",
"license:mit"
] | text-classification | false | MilaNLProc | null | MilaNLProc/hate-ita | 56 | 1 | transformers | 5,800 | ---
language: it
license: mit
tags:
- text classification
- abusive language
- hate speech
- offensive language
widget:
- text: "Ci sono dei bellissimi capibara!"
example_title: "Hate Speech Classification 1"
- text: "Sei una testa di cazzo!!"
example_title: "Hate Speech Classification 2"
- text: "Ti odio!"
example_title: "Hate Speech Classification 3"
---
#
[Debora Nozza](http://dnozza.github.io/) •
[Federico Bianchi](https://federicobianchi.io/) •
[Giuseppe Attanasio](https://gattanasio.cc/)
# HATE-ITA Base
HATE-ITA is a binary hate speech classification model for Italian social media text.
<img src="https://raw.githubusercontent.com/MilaNLProc/hate-ita/main/hateita.png?token=GHSAT0AAAAAABTEBAJ4PNDWAMU3KKIGUOCSYWG4IBA" width="200">
## Abstract
Online hate speech is a dangerous phenomenon that can (and should) be promptly counteracted. While Natural Language Processing has been successfully used for this purpose, much of the research effort is directed toward the English language. This choice severely limits classification power in non-English languages. In this paper, we test several learning frameworks for identifying hate speech in Italian text. We release **HATE-ITA, a set of multi-language models trained on a large set of English data and available Italian datasets**. HATE-ITA performs better than mono-lingual models and also seems to adapt well to language-specific slurs. We believe our findings will encourage research in other mid-to-low resource communities and provide a valuable benchmarking tool for the Italian community.
## Model
This model is the fine-tuned version of the [XLM-T](https://arxiv.org/abs/2104.12250) model.
| Model | Download |
| ------ | -------------------------|
| `hate-ita` | [Link](https://huggingface.co/MilaNLProc/hate-ita) |
| `hate-ita-xlm-r-base` | [Link](https://huggingface.co/MilaNLProc/hate-ita-xlm-r-base) |
| `hate-ita-xlm-r-large` | [Link](https://huggingface.co/MilaNLProc/hate-ita-xlm-r-large) |
## Results
This model achieves an F1 score of 0.83 on the test set.
## Usage
```python
from transformers import pipeline
classifier = pipeline("text-classification", model='MilaNLProc/hate-ita', top_k=2)
prediction = classifier("ti odio")
print(prediction)
```
## Citation
Please use the following BibTeX entry if you use this model in your project:
```
@inproceedings{nozza-etal-2022-hate-ita,
title = {{HATE-ITA}: Hate Speech Detection in Italian Social Media Text},
author = "Nozza, Debora and Bianchi, Federico and Attanasio, Giuseppe",
booktitle = "Proceedings of the 6th Workshop on Online Abuse and Harms",
year = "2022",
publisher = "Association for Computational Linguistics"
}
```
## Ethical Statement
While promising, the results in this work should not be interpreted as a definitive assessment of the performance of hate speech detection in Italian. We are unsure whether our model can maintain a stable and fair precision across the different targets and categories. HATE-ITA might overlook some sensitive details, which practitioners should treat with care. |
BNZSA/distilbert-base-uncased-country-NER-address | 9f7f7600e4e7e90a28cd5a993842b01cad5f7259 | 2022-06-10T11:02:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:gpl-3.0"
] | text-classification | false | BNZSA | null | BNZSA/distilbert-base-uncased-country-NER-address | 56 | null | transformers | 5,801 | ---
license: gpl-3.0
---
|
microsoft/swinv2-tiny-patch4-window8-256 | 2b979ac403df19f72443cd151e9e957842eb9645 | 2022-07-07T14:18:14.000Z | [
"pytorch",
"swinv2",
"transformers"
] | null | false | microsoft | null | microsoft/swinv2-tiny-patch4-window8-256 | 56 | null | transformers | 5,802 | Entry not found |
fujiki/t5-efficient-xl-nl6-en2ja | 87dd3369f46ec03661da5a33be63a638f82a6c39 | 2022-07-04T03:48:38.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | fujiki | null | fujiki/t5-efficient-xl-nl6-en2ja | 56 | null | transformers | 5,803 | ---
license: afl-3.0
---
|
tezign/Erlangshen-Sentiment-FineTune | 23b9777a3d2355fdcfb06dea2d7b2a3e965746a4 | 2022-07-14T09:36:39.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"transformers",
"sentiment-analysis"
] | text-classification | false | tezign | null | tezign/Erlangshen-Sentiment-FineTune | 56 | null | transformers | 5,804 | ---
language: zh
tags:
- sentiment-analysis
- pytorch
widget:
- text: "房间非常非常小,内窗,特别不透气,因为夜里走廊灯光是亮的,内窗对着走廊,窗帘又不能完全拉死,怎么都会有一道光射进来。"
- text: "尽快有洗衣房就好了。"
- text: "很好,干净整洁,交通方便。"
- text: "干净整洁很好"
---
# Note
BERT-based sentiment analysis model, fine-tuned from https://huggingface.co/IDEA-CCNL/Erlangshen-Roberta-330M-Sentiment .
The model was trained on a **Chinese hotel review dataset**.
# Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
MODEL = "tezign/Erlangshen-Sentiment-FineTune"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, trust_remote_code=True)
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
result = classifier("很好,干净整洁,交通方便。")
print(result)
"""
print result
>> [{'label': 'Positive', 'score': 0.989660382270813}]
"""
```
# Evaluate
We compared the performance of **our fine-tuned model** and the **original Erlangshen model** on the **Chinese hotel review test dataset** (5,429 negative reviews and 1,251 positive reviews).
The results show that our model substantially improves the precision and recall for positive reviews:
```text
Our finetune model:
precision recall f1-score support
Negative 0.99 0.98 0.98 5429
Positive 0.92 0.95 0.93 1251
accuracy 0.97 6680
macro avg 0.95 0.96 0.96 6680
weighted avg 0.97 0.97 0.97 6680
======================================================
Original Erlangshen model:
precision recall f1-score support
Negative 0.81 1.00 0.90 5429
Positive 0.00 0.00 0.00 1251
accuracy 0.81 6680
macro avg 0.41 0.50 0.45 6680
weighted avg 0.66 0.81 0.73 6680
``` |
ai4bharat/indicwav2vec-hindi | e746fa3e8de3f6629b9ded29d9c1c565c566f3db | 2022-07-27T20:31:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"arxiv:2006.11477",
"transformers",
"audio",
"speech",
"asr",
"license:apache-2.0"
] | automatic-speech-recognition | false | ai4bharat | null | ai4bharat/indicwav2vec-hindi | 56 | null | transformers | 5,805 | ---
language: hi
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- wav2vec2
- asr
license: apache-2.0
---
# IndicWav2Vec-Hindi
This is a [Wav2Vec2](https://arxiv.org/abs/2006.11477) style ASR model trained in [fairseq](https://github.com/facebookresearch/fairseq) and ported to Hugging Face.
More details on datasets, training-setup and conversion to HuggingFace format can be found in the [IndicWav2Vec](https://github.com/AI4Bharat/IndicWav2Vec) repo.
*Note: This model does not support inference with a language model.*
## Script to Run Inference
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
DEVICE_ID = "cuda" if torch.cuda.is_available() else "cpu"
MODEL_ID = "ai4bharat/indicwav2vec-hindi"
sample = next(iter(load_dataset("common_voice", "hi", split="test", streaming=True)))
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48000, 16000).numpy()
model = AutoModelForCTC.from_pretrained(MODEL_ID).to(DEVICE_ID)
processor = AutoProcessor.from_pretrained(MODEL_ID)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values.to(DEVICE_ID)).logits.cpu()
prediction_ids = torch.argmax(logits, dim=-1)
output_str = processor.batch_decode(prediction_ids)[0]
print(f"Greedy Decoding: {output_str}")
```
# **About AI4Bharat**
- Website: https://ai4bharat.org/
- Code: https://github.com/AI4Bharat
- HuggingFace: https://huggingface.co/ai4bharat |
ArBert/albert-base-v2-finetuned-ner | d95acee469161c82720bc85773685f8a8e9c60ac | 2022-02-03T14:26:33.000Z | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/albert-base-v2-finetuned-ner | 55 | 1 | transformers | 5,806 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: albert-base-v2-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9301181102362205
- name: Recall
type: recall
value: 0.9376033513394334
- name: F1
type: f1
value: 0.9338457315399397
- name: Accuracy
type: accuracy
value: 0.9851613086447802
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-ner
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0700
- Precision: 0.9301
- Recall: 0.9376
- F1: 0.9338
- Accuracy: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
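The model can be tried with the standard `transformers` token-classification (NER) pipeline, for example along these lines (a minimal sketch; the input sentence is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "ArBert/albert-base-v2-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Group sub-word tokens into whole entities
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("My name is Clara and I live in Berkeley, California."))
```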
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.096 | 1.0 | 1756 | 0.0752 | 0.9163 | 0.9201 | 0.9182 | 0.9811 |
| 0.0481 | 2.0 | 3512 | 0.0761 | 0.9169 | 0.9293 | 0.9231 | 0.9830 |
| 0.0251 | 3.0 | 5268 | 0.0700 | 0.9301 | 0.9376 | 0.9338 | 0.9852 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition | cbe6ba0246652df652d2ad88baa1cad77f180aa4 | 2021-10-20T05:38:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"el",
"dataset:aesdd",
"transformers",
"audio",
"audio-classification",
"speech",
"license:apache-2.0"
] | audio-classification | false | Bagus | null | Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition | 55 | null | transformers | 5,807 | ---
language: el
datasets:
- aesdd
tags:
- audio
- audio-classification
- speech
license: apache-2.0
---
~~~
# required packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!git clone https://github.com/m3hrdadfi/soxan
cd soxan
~~~
# prediction
~~~
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
~~~
~~~
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
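# Note: Wav2Vec2ForSpeechClassification is provided by the soxan repository cloned above, not by transformers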
model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
~~~
~~~
def speech_file_to_array_fn(path, sampling_rate):
speech_array, _sampling_rate = torchaudio.load(path)
resampler = torchaudio.transforms.Resample(_sampling_rate)
speech = resampler(speech_array).squeeze().numpy()
return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{"Emotion": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
return outputs
~~~
# prediction
~~~
# path for a sample
path = '/data/jtes_v1.1/wav/f01/ang/f01_ang_01.wav'
outputs = predict(path, sampling_rate)
~~~
~~~
[{'Emotion': 'anger', 'Score': '98.3%'},
{'Emotion': 'disgust', 'Score': '0.0%'},
{'Emotion': 'fear', 'Score': '0.4%'},
{'Emotion': 'happiness', 'Score': '0.7%'},
{'Emotion': 'sadness', 'Score': '0.5%'}]
~~~ |
Finnish-NLP/electra-base-discriminator-finnish | cea3059be27d2b56aeae92e58e92b8fbbfd62f44 | 2022-06-13T16:14:27.000Z | [
"pytorch",
"tensorboard",
"electra",
"pretraining",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"transformers",
"finnish",
"license:apache-2.0"
] | null | false | Finnish-NLP | null | Finnish-NLP/electra-base-discriminator-finnish | 55 | 1 | transformers | 5,808 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- electra
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
---
# ELECTRA for Finnish
Pretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in
[this paper](https://openreview.net/pdf?id=r1xMH1BtvB)
and first released at [this page](https://github.com/google-research/electra).
**Note**: this model is the ELECTRA discriminator model intended to be used for fine-tuning on downstream tasks like text classification. The ELECTRA generator model intended to be used for the fill-mask task is released here: [Finnish-NLP/electra-base-generator-finnish](https://huggingface.co/Finnish-NLP/electra-base-generator-finnish)
## Model description
Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).
This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs.
## Intended uses & limitations
You can use the raw model for extracting features or fine-tune it to a downstream task like text classification.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import ElectraTokenizer, ElectraModel
import torch
tokenizer = ElectraTokenizer.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish")
model = ElectraModel.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish")
inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state)
```
and in TensorFlow:
```python
from transformers import ElectraTokenizer, TFElectraModel
tokenizer = ElectraTokenizer.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish")
model = TFElectraModel.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish", from_pt=True)
inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish ELECTRA model was pretrained on the combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was AdamW with a learning rate of 2e-4, learning rate warmup for 20,000 steps, and linear decay of the learning rate afterwards.
Training code was from the official [ELECTRA repository](https://github.com/google-research/electra), and some additional instructions were taken from [here](https://github.com/stefan-it/turkish-bert/blob/master/electra/CHEATSHEET.md).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths, 128 and 512, while Eduskunta was fine-tuned only with a sequence length of 128.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our other models:
| | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|-----------------------------------------------|----------|---------------------|---------------------|----------------------|
|Finnish-NLP/electra-base-discriminator-finnish |86.25 |93.78 |94.77 |70.20 |
|Finnish-NLP/convbert-base-finnish |86.98 |94.04 |95.02 |71.87 |
|Finnish-NLP/roberta-large-wechsel-finnish |88.19 |**94.91** |95.18 |74.47 |
|Finnish-NLP/roberta-large-finnish-v2 |88.17 |94.46 |95.22 |74.83 |
|Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
|TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |94.90 |**95.49** |**76.07** |
To conclude, this ELECTRA model loses to the other models but is still fairly competitive compared to our roberta-large models, considering that this ELECTRA model has 110M parameters while the roberta-large models have 355M.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
Helsinki-NLP/opus-mt-en-tw | 1bb6b4f687e20670094c8ac30048119e2a2ce972 | 2021-09-09T21:40:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"tw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-tw | 55 | null | transformers | 5,809 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tw
* source languages: en
* target languages: tw
* OPUS readme: [en-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.eval.txt)
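
The model can be loaded with the standard Marian classes from `transformers` (a minimal sketch; `sentencepiece` must be installed for the tokenizer):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-tw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```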
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tw | 38.2 | 0.577 |
|
KoichiYasuoka/roberta-base-english-upos | d0917602cc2cd705ffd94685e0206c3b3a83966a | 2022-02-16T03:13:03.000Z | [
"pytorch",
"roberta",
"token-classification",
"en",
"dataset:universal_dependencies",
"transformers",
"english",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-english-upos | 55 | null | transformers | 5,810 | ---
language:
- "en"
tags:
- "english"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
---
# roberta-base-english-upos
## Model Description
This is a RoBERTa model pre-trained with [UD_English](https://universaldependencies.org/en/) for POS-tagging and dependency-parsing, derived from [roberta-base](https://huggingface.co/roberta-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-english-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-english-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-english-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
amitness/nepbert | 929fb302536e823a8ec7a5c6c65ff01f99845f70 | 2021-09-21T16:00:56.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"ne",
"dataset:cc100",
"transformers",
"nepali-laguage-model",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | amitness | null | amitness/nepbert | 55 | null | transformers | 5,811 | ---
language:
- ne
thumbnail:
tags:
- roberta
- nepali-laguage-model
license: mit
datasets:
- cc100
widget:
- text: तिमीलाई कस्तो <mask>?
---
# nepbert
## Model description
Roberta trained from scratch on the Nepali CC-100 dataset with 12 million sentences.
## Intended uses & limitations
#### How to use
```python
from transformers import pipeline
pipe = pipeline(
"fill-mask",
model="amitness/nepbert",
tokenizer="amitness/nepbert"
)
print(pipe(u"तिमीलाई कस्तो <mask>?"))
```
## Training data
The data was taken from the Nepali language subset of the CC-100 dataset.
## Training procedure
The model was trained on Google Colab using `1x Tesla V100`. |
google/tapas-large-finetuned-tabfact | fdefc8e307f981aba4b8eb772d0f7a884ed24770 | 2021-11-29T13:21:34.000Z | [
"pytorch",
"tf",
"tapas",
"text-classification",
"en",
"dataset:tab_fact",
"arxiv:2010.00571",
"arxiv:2004.02349",
"transformers",
"sequence-classification",
"license:apache-2.0"
] | text-classification | false | google | null | google/tapas-large-finetuned-tabfact | 55 | null | transformers | 5,812 | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS large model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_large_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_large`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then
jointly training this randomly initialized classification head with the base model on TabFact.
## Intended uses & limitations
You can use this model for classifying whether a sentence is supported or refuted by the contents of a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
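As an illustration, a sentence/table pair can be scored along the following lines (a minimal sketch using the `transformers` TAPAS classes; the table and sentence are made up, and TAPAS may additionally require `torch-scatter` depending on your Transformers version):

```python
from transformers import TapasTokenizer, TapasForSequenceClassification
import pandas as pd
import torch

model_name = "google/tapas-large-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)

# Table cells must be strings for the TAPAS tokenizer
table = pd.DataFrame({
    "City": ["Paris", "London", "Lyon"],
    "Inhabitants": ["2.1 million", "8.9 million", "0.5 million"],
})
sentence = "London has more inhabitants than Paris."

inputs = tokenizer(table=table, queries=[sentence], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# In the TabFact setup, class 1 means the sentence is supported (entailed), class 0 refuted
predicted_class = logits.argmax(dim=-1).item()
print("supported" if predicted_class == 1 else "refuted")
```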
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup
ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Addis Ababa, Ethiopia},
month = {April},
year = {2020}
}
``` |
indonesian-nlp/gpt2 | 0995ba6b42b12180954cdc68b7e08fa7bc6daae6 | 2022-02-15T17:31:03.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"id",
"transformers"
] | text-generation | false | indonesian-nlp | null | indonesian-nlp/gpt2 | 55 | 1 | transformers | 5,813 | ---
language: id
widget:
- text: "Sewindu sudah kita tak berjumpa, rinduku padamu sudah tak terkira."
---
# GPT2-small-indonesian
This is a pretrained model on Indonesian language using a causal language modeling (CLM) objective, which was first
introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104)
organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
The demo can be found [here](https://huggingface.co/spaces/flax-community/gpt2-indonesian).
## How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness,
we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='flax-community/gpt2-small-indonesian')
>>> set_seed(42)
>>> generator("Sewindu sudah kita tak berjumpa,", max_length=30, num_return_sequences=5)
[{'generated_text': 'Sewindu sudah kita tak berjumpa, dua dekade lalu, saya hanya bertemu sekali. Entah mengapa, saya lebih nyaman berbicara dalam bahasa Indonesia, bahasa Indonesia'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi dalam dua hari ini, kita bisa saja bertemu.”\
“Kau tau, bagaimana dulu kita bertemu?” aku'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, banyak kisah yang tersimpan. Tak mudah tuk kembali ke pelukan, di mana kini kita berada, sebuah tempat yang jauh'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, sejak aku lulus kampus di Bandung, aku sempat mencari kabar tentangmu. Ah, masih ada tempat di hatiku,'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi Tuhan masih saja menyukarkan doa kita masing-masing.\
Tuhan akan memberi lebih dari apa yang kita'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-small-indonesian')
model = GPT2Model.from_pretrained('flax-community/gpt2-small-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-small-indonesian')
model = TFGPT2Model.from_pretrained('flax-community/gpt2-small-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Limitations and bias
The training data used for this model are Indonesian websites of [OSCAR](https://oscar-corpus.com/),
[mc4](https://huggingface.co/datasets/mc4) and [Wikipedia](https://huggingface.co/datasets/wikipedia). The datasets
contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on
the dataset (see the **Training data** section), the filtering is by no means a thorough mitigation of biased content
that is eventually used by the training data. These biases might also affect models that are fine-tuned using this model.
As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we
> do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry
> out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender,
> race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with
> similar levels of caution around use cases that are sensitive to biases around human attributes.
We have done a basic bias analysis that you can find in this [notebook](https://huggingface.co/flax-community/gpt2-small-indonesian/blob/main/bias_analysis/gpt2_medium_indonesian_bias_analysis.ipynb), performed on [Indonesian GPT2 medium](https://huggingface.co/flax-community/gpt2-medium-indonesian), based on the bias analysis for [Polish GPT2](https://huggingface.co/flax-community/papuGaPT2) with modifications.
### Gender bias
We generated 50 texts starting with prompts "She/He works as". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.

The most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).

### Ethnicity bias
We generated 1,200 texts to assess bias across ethnicity and gender vectors. We will create prompts with the following scheme:
* Person - we will assess 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity)
* Topic - we will use 5 different topics:
* random act: *entered home*
* said: *said*
* works as: *works as*
* intent: *let [person] ...*
* define: *is*
Sample of generated prompt: "seorang perempuan sunda masuk ke rumah..." (a Sundanese woman enters the house...)
We used a [model](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-indonesian) trained on Indonesian hate speech corpus ([dataset 1](https://github.com/okkyibrohim/id-multi-label-hate-speech-and-abusive-language-detection), [dataset 2](https://github.com/ialfina/id-hatespeech-detection)) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline.

### Religion bias
With the same methodology above, we generated 1,400 texts to assess bias across religion and gender vectors. We will assess 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism) with Neutral (no religion) as a baseline.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline.

## Training data
The model was trained on a combined dataset of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4)
and Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB
of data in total. The mc4 dataset was cleaned using [this filtering script](https://github.com/Wikidepia/indonesian_datasets/blob/master/dump/mc4/cleanup.py)
and we also only included links that have been cited by the Indonesian Wikipedia.
## Training procedure
The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was `4d 14h 50m 47s`.
### Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| dataset | train loss | eval loss | eval perplexity |
| ---------- | ---------- | -------------- | ---------- |
| ID OSCAR+mc4+wikipedia (29GB) | 3.046 | 2.926 | 18.66 |
### Tracking
The training process was tracked in [TensorBoard](https://huggingface.co/flax-community/gpt2-small-indonesian/tensorboard) and [Weights and Biases](https://wandb.ai/wandb/hf-flax-gpt2-indonesian?workspace=user-cahya).
## Team members
- Akmal ([@Wikidepia](https://huggingface.co/Wikidepia))
- alvinwatner ([@alvinwatner](https://huggingface.co/alvinwatner))
- Cahya Wirawan ([@cahya](https://huggingface.co/cahya))
- Galuh Sahid ([@Galuh](https://huggingface.co/Galuh))
- Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia))
- Muhammad Fhadli ([@muhammadfhadli](https://huggingface.co/muhammadfhadli))
- Samsul Rahmadani ([@munggok](https://huggingface.co/munggok))
## Future work
We would like to pre-train further the models with larger and cleaner datasets and fine-tune it to specific domains
if we can get the necessary hardware resources. |
kornosk/bert-election2020-twitter-stance-biden | d74e980707975d786a5bda3528cce403edddb804 | 2022-05-02T22:59:23.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers",
"twitter",
"stance-detection",
"election2020",
"politics",
"license:gpl-3.0"
] | text-classification | false | kornosk | null | kornosk/bert-election2020-twitter-stance-biden | 55 | 2 | transformers | 5,814 | ---
language: "en"
tags:
- twitter
- stance-detection
- election2020
- politics
license: "gpl-3.0"
---
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (f-BERT)
Pre-trained weights for **f-BERT** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Joe Biden.
# Training Objective
This model is initialized with BERT-base and trained with the normal MLM objective, with a classification layer fine-tuned for stance detection towards Joe Biden.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# select model path here
pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden"
# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path)
id2label = {
0: "AGAINST",
1: "FAVOR",
2: "NONE"
}
##### Prediction Neutral #####
sentence = "Hello World."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Favor #####
sentence = "Go Go Biden!!!"
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Against #####
sentence = "Biden is the worst."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
# please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
``` |
monologg/koelectra-small-finetuned-sentiment | 1dd80335c9f7861d2bdc01c7712301c38a23b6f7 | 2020-05-23T09:19:14.000Z | [
"pytorch",
"tflite",
"electra",
"text-classification",
"transformers"
] | text-classification | false | monologg | null | monologg/koelectra-small-finetuned-sentiment | 55 | null | transformers | 5,815 | Entry not found |
philschmid/distilroberta-base-ner-wikiann-conll2003-3-class | cc95e1ad7cb7ca6f393e1c4bf2b44aab8ab23b4a | 2021-05-26T14:13:00.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:wikiann-conll2003",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | philschmid | null | philschmid/distilroberta-base-ner-wikiann-conll2003-3-class | 55 | 2 | transformers | 5,816 | ---
license: apache-2.0
tags:
- token-classification
datasets:
- wikiann-conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilroberta-base-ner-wikiann-conll2003-3-class
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann-conll2003
type: wikiann-conll2003
metrics:
- name: Precision
type: precision
value: 0.9624757386241104
- name: Recall
type: recall
value: 0.9667497021553124
- name: F1
type: f1
value: 0.964607986167396
- name: Accuracy
type: accuracy
value: 0.9913626461292995
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-ner-wikiann-conll2003-3-class
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the wikiann and conll2003 datasets. It uses the classes of wikiann:
O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6).
eval F1-Score: **96.25** (merged dataset)
test F1-Score: **92.41** (merged dataset)
## Model Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-wikiann-conll2003-3-class")
model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-wikiann-conll2003-3-class")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
example = "My name is Philipp and live in Germany"
nlp(example)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.9086903597787154e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
It achieves the following results on the evaluation set:
- Loss: 0.0520
- Precision: 0.9625
- Recall: 0.9667
- F1: 0.9646
- Accuracy: 0.9914
It achieves the following results on the test set:
- Loss: 0.141
- Precision: 0.917
- Recall: 0.9313
- F1: 0.9241
- Accuracy: 0.9807
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.6.2
- Tokenizers 0.10.3
|
mindee/fasterrcnn_mobilenet_v3_large_fpn | 36909b9a601bb45bf3154ca176c1cb63477c54b3 | 2022-03-11T09:33:24.000Z | [
"pytorch",
"dataset:docartefacts",
"arxiv:1506.01497",
"doctr",
"object-detection",
"license:apache-2.0"
] | object-detection | false | mindee | null | mindee/fasterrcnn_mobilenet_v3_large_fpn | 55 | 3 | doctr | 5,817 | ---
license: apache-2.0
tags:
- object-detection
- pytorch
library_name: doctr
datasets:
- docartefacts
---
# Faster-RCNN model
Pretrained on [DocArtefacts](https://mindee.github.io/doctr/datasets.html#doctr.datasets.DocArtefacts). The Faster-RCNN architecture was introduced in [this paper](https://arxiv.org/pdf/1506.01497.pdf).
## Model description
The core idea of the authors is to unify region proposal with the core detection module of Fast R-CNN.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/) are required to install docTR.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/python-doctr/) as follows:
```shell
pip install python-doctr[torch]
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/mindee/doctr.git
pip install -e doctr/.[torch]
```
## Usage instructions
```python
from PIL import Image
import torch
from torchvision.transforms import Compose, ConvertImageDtype, PILToTensor
from doctr.models.obj_detection.factory import from_hub
model = from_hub("mindee/fasterrcnn_mobilenet_v3_large_fpn").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
transform = Compose([
PILToTensor(),
ConvertImageDtype(torch.float32),
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/RenHG015,
author = {Shaoqing Ren and
Kaiming He and
Ross B. Girshick and
Jian Sun},
title = {Faster {R-CNN:} Towards Real-Time Object Detection with Region Proposal
Networks},
journal = {CoRR},
volume = {abs/1506.01497},
year = {2015},
url = {http://arxiv.org/abs/1506.01497},
eprinttype = {arXiv},
eprint = {1506.01497},
timestamp = {Mon, 13 Aug 2018 16:46:02 +0200},
biburl = {https://dblp.org/rec/journals/corr/RenHG015.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@misc{doctr2021,
title={docTR: Document Text Recognition},
author={Mindee},
year={2021},
publisher = {GitHub},
howpublished = {\url{https://github.com/mindee/doctr}}
}
```
|
ali2066/bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10 | cc565eaf6b3a035b0ee9155e6073a36ad9cef388 | 2022-03-01T03:43:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10 | 55 | null | transformers | 5,818 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2741
- Precision: 0.1936
- Recall: 0.3243
- F1: 0.2424
- Accuracy: 0.8764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
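
With the Transformers version listed under framework versions below, these settings roughly correspond to a `TrainingArguments` configuration like the following (an illustrative sketch, not the original training script; the output directory is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased_token_itr0_2e-05_all",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default
)
```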
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3235 | 0.1062 | 0.2076 | 0.1405 | 0.8556 |
| No log | 2.0 | 60 | 0.2713 | 0.1710 | 0.3080 | 0.2199 | 0.8872 |
| No log | 3.0 | 90 | 0.3246 | 0.2010 | 0.3391 | 0.2524 | 0.8334 |
| No log | 4.0 | 120 | 0.3008 | 0.2011 | 0.3685 | 0.2602 | 0.8459 |
| No log | 5.0 | 150 | 0.2714 | 0.1780 | 0.3772 | 0.2418 | 0.8661 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27 | ac80b1a3ff58c13dc4405237763579c509406dea | 2022-03-01T03:51:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27 | 55 | null | transformers | 5,819 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2899
- Precision: 0.3170
- Recall: 0.5261
- F1: 0.3956
- Accuracy: 0.8799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.2912 | 0.2752 | 0.4444 | 0.3400 | 0.8730 |
| No log | 2.0 | 60 | 0.2772 | 0.4005 | 0.4589 | 0.4277 | 0.8911 |
| No log | 3.0 | 90 | 0.2267 | 0.3642 | 0.5281 | 0.4311 | 0.9043 |
| No log | 4.0 | 120 | 0.2129 | 0.3617 | 0.5455 | 0.4350 | 0.9140 |
| No log | 5.0 | 150 | 0.2399 | 0.3797 | 0.5556 | 0.4511 | 0.9114 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | b40472575e770ed814410816176942aab327c602 | 2022-05-26T12:49:05.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"transformers",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Finnish-NLP | null | Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 55 | 1 | transformers | 5,820 | ---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-lm-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 4.09
- name: Test CER
type: cer
value: 0.88
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: FLEURS ASR
type: google/fleurs
args: fi_fi
metrics:
- name: Test WER
type: wer
value: 12.11
- name: Test CER
type: cer
value: 5.65
---
# Wav2vec2-xls-r-1b for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [aapot/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm-v2) model so that model has just been copied/moved to this `Finnish-NLP` Hugging Face organization.
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is fine-tuned version of the pretrained model (1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
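For a quick test, the model can also be loaded with the `transformers` ASR pipeline (a minimal sketch; `pyctcdecode` and `kenlm` need to be installed for the LM-boosted decoding, `ffmpeg` is used to decode the audio file, and the audio path is a placeholder for your own file):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2")
print(asr("path/to/your_finnish_audio.wav"))  # placeholder path; the pipeline decodes and resamples the file
```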
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audios of similar length. However, you can also try this model with much longer audios and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
A vast majority of the data used for fine-tuning was from the Finnish Parliament dataset, so this model may not generalize well to very different domains like common daily spoken Finnish with dialects etc. In addition, the audio in the datasets tends to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained with text data from the audio transcriptions and from a subset of Finnish Wikipedia. Thus, the decoder's language model may not generalize to very different language, for example to spoken daily language with dialects (because especially the Wikipedia contains mostly formal Finnish language). It may be beneficial to train your own KenLM language model for your domain language and use that in the decoding.
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were the text transcriptions of the audio training data and 100k random samples of the cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
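For reference, the n-gram model itself is built with KenLM's command line tools as in that tutorial; the sketch below assumes a compiled KenLM checkout and uses placeholder file names:
```bash
# Train a 5-gram language model on one-sentence-per-line text
# ("train_text.txt" and the output names are placeholders).
kenlm/build/bin/lmplz -o 5 < train_text.txt > finnish_5gram.arpa

# Optionally convert the ARPA file to the faster binary format.
kenlm/build/bin/build_binary finnish_5gram.arpa finnish_5gram.bin
```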
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7778 | 0.17 | 500 | 0.2851 | 0.3572 |
| 0.5506 | 0.34 | 1000 | 0.1595 | 0.2130 |
| 0.6569 | 0.5 | 1500 | 0.1458 | 0.2046 |
| 0.5997 | 0.67 | 2000 | 0.1374 | 0.1975 |
| 0.542 | 0.84 | 2500 | 0.1390 | 0.1956 |
| 0.4815 | 1.01 | 3000 | 0.1266 | 0.1813 |
| 0.6982 | 1.17 | 3500 | 0.1441 | 0.1965 |
| 0.4522 | 1.34 | 4000 | 0.1232 | 0.1822 |
| 0.4655 | 1.51 | 4500 | 0.1209 | 0.1702 |
| 0.4069 | 1.68 | 5000 | 0.1149 | 0.1688 |
| 0.4226 | 1.84 | 5500 | 0.1121 | 0.1560 |
| 0.3993 | 2.01 | 6000 | 0.1091 | 0.1557 |
| 0.406 | 2.18 | 6500 | 0.1115 | 0.1553 |
| 0.4098 | 2.35 | 7000 | 0.1144 | 0.1560 |
| 0.3995 | 2.51 | 7500 | 0.1028 | 0.1476 |
| 0.4101 | 2.68 | 8000 | 0.1129 | 0.1511 |
| 0.3636 | 2.85 | 8500 | 0.1025 | 0.1517 |
| 0.3534 | 3.02 | 9000 | 0.1068 | 0.1480 |
| 0.3836 | 3.18 | 9500 | 0.1072 | 0.1459 |
| 0.3531 | 3.35 | 10000 | 0.0928 | 0.1367 |
| 0.3649 | 3.52 | 10500 | 0.1042 | 0.1426 |
| 0.3645 | 3.69 | 11000 | 0.0979 | 0.1433 |
| 0.3685 | 3.85 | 11500 | 0.0947 | 0.1346 |
| 0.3325 | 4.02 | 12000 | 0.0991 | 0.1352 |
| 0.3497 | 4.19 | 12500 | 0.0919 | 0.1358 |
| 0.3303 | 4.36 | 13000 | 0.0888 | 0.1272 |
| 0.3323 | 4.52 | 13500 | 0.0888 | 0.1277 |
| 0.3452 | 4.69 | 14000 | 0.0894 | 0.1279 |
| 0.337 | 4.86 | 14500 | 0.0917 | 0.1289 |
| 0.3114 | 5.03 | 15000 | 0.0942 | 0.1313 |
| 0.3099 | 5.19 | 15500 | 0.0902 | 0.1239 |
| 0.3079 | 5.36 | 16000 | 0.0871 | 0.1256 |
| 0.3293 | 5.53 | 16500 | 0.0861 | 0.1263 |
| 0.3123 | 5.7 | 17000 | 0.0876 | 0.1203 |
| 0.3093 | 5.86 | 17500 | 0.0848 | 0.1226 |
| 0.2903 | 6.03 | 18000 | 0.0914 | 0.1221 |
| 0.297 | 6.2 | 18500 | 0.0841 | 0.1185 |
| 0.2797 | 6.37 | 19000 | 0.0858 | 0.1165 |
| 0.2878 | 6.53 | 19500 | 0.0874 | 0.1161 |
| 0.2974 | 6.7 | 20000 | 0.0835 | 0.1173 |
| 0.3051 | 6.87 | 20500 | 0.0835 | 0.1178 |
| 0.2941 | 7.04 | 21000 | 0.0852 | 0.1155 |
| 0.258 | 7.21 | 21500 | 0.0832 | 0.1132 |
| 0.2778 | 7.37 | 22000 | 0.0829 | 0.1110 |
| 0.2751 | 7.54 | 22500 | 0.0822 | 0.1069 |
| 0.2887 | 7.71 | 23000 | 0.0819 | 0.1103 |
| 0.2509 | 7.88 | 23500 | 0.0787 | 0.1055 |
| 0.2501 | 8.04 | 24000 | 0.0807 | 0.1076 |
| 0.2399 | 8.21 | 24500 | 0.0784 | 0.1052 |
| 0.2539 | 8.38 | 25000 | 0.0772 | 0.1075 |
| 0.248 | 8.55 | 25500 | 0.0772 | 0.1055 |
| 0.2689 | 8.71 | 26000 | 0.0763 | 0.1027 |
| 0.2855 | 8.88 | 26500 | 0.0756 | 0.1035 |
| 0.2421 | 9.05 | 27000 | 0.0771 | 0.0998 |
| 0.2497 | 9.22 | 27500 | 0.0756 | 0.0971 |
| 0.2367 | 9.38 | 28000 | 0.0741 | 0.0974 |
| 0.2473 | 9.55 | 28500 | 0.0739 | 0.0982 |
| 0.2396 | 9.72 | 29000 | 0.0756 | 0.0991 |
| 0.2602 | 9.89 | 29500 | 0.0737 | 0.0975 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0), [Common Voice 9.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0) and with the [FLEURS ASR Finnish test split](https://huggingface.co/datasets/google/fleurs).
This model's training data includes the training splits of Common Voice 7.0 but our newer `Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned` and `Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish` models include the Common Voice 9.0 so we ran tests for both Common Voice versions. Note: Common Voice doesn't seem to fully preserve the test split as fixed between the dataset versions so it is possible that some of the training examples of Common Voice 9.0 are in the test split of the Common Voice 7.0 and vice versa. Thus, Common Voice test result comparisons are not fully accurate between the models trained with different Common Voice versions but the comparison should still be meaningful enough.
### Common Voice 7.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the fifth row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.85 |13.52 |1.35 |2.44 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |**9.66** |0.90 |1.66 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |8.16 |17.92 |1.97 |3.36 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.65 |13.11 |1.20 |2.23 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**4.09** |9.73 |**0.88** |**1.65** |
### Common Voice 9.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset mozilla-foundation/common_voice_9_0 --config fi --split test
```
This model (the fifth row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.93 |14.08 |1.40 |2.59 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |9.83 |0.92 |1.71 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |7.42 |16.45 |1.79 |3.07 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.35 |13.00 |1.14 |2.20 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**3.72** |**8.96** |**0.80** |**1.52** |
### FLEURS ASR testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset google/fleurs --config fi_fi --split test
```
This model (the fifth row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |13.99 |17.16 |6.07 |6.61 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |12.44 |**14.63** |5.77 |6.22 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |17.72 |23.30 |6.78 |7.67 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |20.34 |16.67 |6.97 |6.35 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**12.11** |14.89 |**5.65** |**6.06** |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
hamzab/roberta-fake-news-classification | 331696d6bd2286f88fe0ec364b3e31ef0b7042b3 | 2022-04-07T13:25:28.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:fake-and-real-news-dataset on kaggle",
"transformers",
"classification",
"license:mit"
] | text-classification | false | hamzab | null | hamzab/roberta-fake-news-classification | 55 | null | transformers | 5,821 | ---
license: mit
widget:
- text: "Some ninja attacked the White House."
example_title: "Fake example 1"
language:
- en
tags:
- classification
datasets:
- "fake-and-real-news-dataset on kaggle"
---
## Overview
The model is a `roberta-base` model fine-tuned on the [fake-and-real-news-dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset). It achieves 100% accuracy on that dataset.
The model takes a news article and predicts whether it is true or fake.
The format of the input should be:
```
<title> TITLE HERE <content> CONTENT HERE <end>
```
## Using this model in your code
To use this model, first download it from the Hugging Face Hub:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("hamzab/roberta-fake-news-classification")
model = AutoModelForSequenceClassification.from_pretrained("hamzab/roberta-fake-news-classification")
```
Then, make a prediction as follows:
```python
import torch

def predict_fake(title, text):
    input_str = "<title>" + title + "<content>" + text + "<end>"
    input_ids = tokenizer.encode_plus(input_str, max_length=512, padding="max_length", truncation=True, return_tensors="pt")
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model.to(device)
    with torch.no_grad():
        output = model(input_ids["input_ids"].to(device), attention_mask=input_ids["attention_mask"].to(device))
    return dict(zip(["Fake", "Real"], [x.item() for x in list(torch.nn.Softmax(dim=-1)(output.logits)[0])]))

# Replace the placeholders below with an actual headline and article text.
print(predict_fake("<HEADLINE-HERE>", "<CONTENT-HERE>"))
```
You can also use Gradio to test the model on real-time:
```python
import gradio as gr
iface = gr.Interface(fn=predict_fake, inputs=[gr.inputs.Textbox(lines=1,label="headline"),gr.inputs.Textbox(lines=6,label="content")], outputs="label").launch(share=True)
``` |
NCAI/NCAI-BERT | 889c339b60ab295438ec9c2d6df001c0895f0209 | 2022-04-13T11:27:59.000Z | [
"pytorch",
"lean_albert",
"transformers"
] | null | false | NCAI | null | NCAI/NCAI-BERT | 55 | null | transformers | 5,822 | Entry not found |
prithivida/bert-for-patents-64d | a337ab997cdb5f9cbf9fc18568e5cd254cc2c1c4 | 2022-06-29T07:47:23.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"en",
"transformers",
"masked-lm",
"license:apache-2.0"
] | feature-extraction | false | prithivida | null | prithivida/bert-for-patents-64d | 55 | 2 | transformers | 5,823 | ---
language:
- en
tags:
- masked-lm
- pytorch
pipeline-tag: "fill-mask"
mask-token: "[MASK]"
widget:
- text: "The present [MASK] provides a torque sensor that is small and highly rigid and for which high production efficiency is possible."
- text: "The present invention relates to [MASK] accessories and pertains particularly to a brake light unit for bicycles."
- text: "The present invention discloses a space-bound-free [MASK] and its coordinate determining circuit for determining a coordinate of a stylus pen."
- text: "The illuminated [MASK] includes a substantially translucent canopy supported by a plurality of ribs pivotally swingable towards and away from a shaft."
license: apache-2.0
metrics:
- perplexity
---
# Motivation
This model is based on anferico/bert-for-patents, a BERT<sub>LARGE</sub> model (see the next section for details). By default, the pre-trained model outputs embeddings of size 768 (base models) or 1024 (large models). However, storing millions of embeddings can require a lot of memory/storage. We therefore reduced the embedding dimension to 64, i.e. 1/16th of 1024, using Principal Component Analysis (PCA), and it still gives comparable performance. In our experiments, PCA gave better performance than NMF. Note: this process improves neither the runtime nor the memory requirement for running the model. It only reduces the space needed to store embeddings, for example for semantic search using vector databases.
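As a rough sketch of how such a reduction can be reproduced, the snippet below embeds a dummy corpus with the original large model and fits a 64-component PCA on the pooled vectors; the corpus, the mean pooling, and the fitting details are illustrative assumptions rather than the exact procedure used here:
```python
import torch
from sklearn.decomposition import PCA
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("anferico/bert-for-patents")
model = AutoModel.from_pretrained("anferico/bert-for-patents")

# Dummy corpus; in practice PCA would be fitted on a large, representative set of patent texts.
sentences = [f"Placeholder patent abstract number {i} describing a torque sensor." for i in range(128)]

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    # Mean-pool the last hidden states into one 1024-d vector per text (pooling choice is an assumption).
    embeddings = model(**enc).last_hidden_state.mean(dim=1).numpy()

pca = PCA(n_components=64).fit(embeddings)    # keep 64 principal components
embeddings_64d = pca.transform(embeddings)    # shape: (num_texts, 64)
```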
# BERT for Patents
BERT for Patents is a model trained by Google on 100M+ patents (not just US patents).
If you want to learn more about the model, check out the [blog post](https://cloud.google.com/blog/products/ai-machine-learning/how-ai-improves-patent-analysis), [white paper](https://services.google.com/fh/files/blogs/bert_for_patents_white_paper.pdf) and [GitHub page](https://github.com/google/patents-public-data/blob/master/models/BERT%20for%20Patents.md) containing the original TensorFlow checkpoint.
---
### Projects using this model (or variants of it):
- [Patents4IPPC](https://github.com/ec-jrc/Patents4IPPC) (carried out by [Pi School](https://picampus-school.com/) and commissioned by the [Joint Research Centre (JRC)](https://ec.europa.eu/jrc/en) of the European Commission)
|
abd-1999/autotrain-bbc-news-summarization-694821095 | 9e36cfeccfe662df7f11b6dcd932859fab347419 | 2022-04-03T09:25:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:abd-1999/autotrain-data-bbc-news-summarization",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | abd-1999 | null | abd-1999/autotrain-bbc-news-summarization-694821095 | 55 | 1 | transformers | 5,824 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- abd-1999/autotrain-data-bbc-news-summarization
co2_eq_emissions: 2313.4037079026934
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 694821095
- CO2 Emissions (in grams): 2313.4037079026934
## Validation Metrics
- Loss: 3.0294156074523926
- Rouge1: 2.1467
- Rouge2: 0.0853
- RougeL: 2.1524
- RougeLsum: 2.1534
- Gen Len: 18.5603
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/abd-1999/autotrain-bbc-news-summarization-694821095
``` |
AnReu/albert-for-math-ar-base-ft | 2ddcf162ed80099933edeb0f41bf653f1010900a | 2022-05-30T13:27:05.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | AnReu | null | AnReu/albert-for-math-ar-base-ft | 55 | 1 | transformers | 5,825 | # ALBERT for Math AR
This model is further pre-trained on Mathematics StackExchange questions and answers. It is based on ALBERT base v2 and uses the same tokenizer. In addition to pre-training, the model was fine-tuned on Math Question Answer Retrieval. The sequence classification head is trained to output a relevance score if you input the question as the first segment and the answer as the second segment. You can use the relevance score to rank different answers for retrieval.
## Usage
```python
# based on https://huggingface.co/docs/transformers/main/en/task_summary#sequence-classification
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained("AnReu/albert-for-math-ar-base-ft")
classes = ["non relevant", "relevant"]
sequence_0 = "How can I calculate x in $3x = 5$"
sequence_1 = "Just divide by 3: $x = \\frac{5}{3}$"
sequence_2 = "The general rule for squaring a sum is $(a+b)^2=a^2+2ab+b^2$"
# The tokenizer will automatically add any model specific separators (i.e. <CLS> and <SEP>) and tokens to
# the sequence, as well as compute the attention masks.
irrelevant = tokenizer(sequence_0, sequence_2, return_tensors="pt")
relevant = tokenizer(sequence_0, sequence_1, return_tensors="pt")
irrelevant_classification_logits = model(**irrelevant).logits
relevant_classification_logits = model(**relevant).logits
irrelevant_results = torch.softmax(irrelevant_classification_logits, dim=1).tolist()[0]
relevant_results = torch.softmax(relevant_classification_logits, dim=1).tolist()[0]
# Should be irrelevant
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(irrelevant_results[i] * 100))}%")

# Should be relevant
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(relevant_results[i] * 100))}%")
```
## Reference
If you use this model, please consider referencing our paper:
```bibtex
@inproceedings{reusch2021tu_dbs,
title={TU\_DBS in the ARQMath Lab 2021, CLEF},
author={Reusch, Anja and Thiele, Maik and Lehner, Wolfgang},
year={2021},
organization={CLEF}
}
```
|
Graphcore/deberta-base-squad | f87e807b8a577d2270785935fb4c2285d9492dc3 | 2022-06-29T14:22:22.000Z | [
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Graphcore | null | Graphcore/deberta-base-squad | 55 | 1 | transformers | 5,826 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: deberta-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-squad
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 1984
- distributed_type: IPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 2.0
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cpu
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
Alethea/GPT2-chitchat | b8ad66d37acafef9c52597c96684fd9086ad1125 | 2022-04-07T09:44:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:apache-2.0"
] | conversational | false | Alethea | null | Alethea/GPT2-chitchat | 55 | null | transformers | 5,827 | ---
tags:
- conversational
license: apache-2.0
---
|
hackathon-pln-es/twitter_sexismo-finetuned-robertuito-exist2021 | a2eaa05e4fd69844d93b257139dba7ff591c0eba | 2022-04-09T11:33:51.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:EXIST Dataset",
"transformers",
"license:apache-2.0",
"model-index"
] | text-classification | false | hackathon-pln-es | null | hackathon-pln-es/twitter_sexismo-finetuned-robertuito-exist2021 | 55 | null | transformers | 5,828 | ---
license: apache-2.0
tags:
-
datasets:
- EXIST Dataset
widget:
- text: "manejas muy bien para ser mujer"
- text: "En temas políticos hombres y mujeres son iguales"
- text: "Los ipad son unos equipos electrónicos"
metrics:
- accuracy
model-index:
- name: twitter_sexismo-finetuned-exist2021
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: EXIST Dataset
type: EXIST Dataset
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.80
- name: F2
type: F-measure
value: 0.89
---
# twitter_sexismo-finetuned-exist2021
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) on the EXIST dataset.
It achieves the following results on the evaluation set:
- Loss: 0.47
- Accuracy: 0.80
- F1: 0.83
- F2: 0.89
## Model description
Model for the 'Somos NLP' Hackathon for detecting sexism in Spanish tweets. Created by:
- **medardodt**
- **MariaIsabel**
- **ManRo**
- **lucel172**
- **robertou2**
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The model has been trained to achieve the best F2 score. The F-measure is the harmonic mean of precision and recall, giving each the same weight; it allows a model to be evaluated with a single score that takes both into account, which is helpful when describing performance and comparing models. The Fbeta-measure is a generalization of the F-measure that adds a configuration parameter called beta. The default beta of 1.0 is the same as the F-measure. A smaller beta, such as 0.5, gives more weight to precision and less to recall, whereas a larger beta, such as 2.0, gives less weight to precision and more to recall. It is useful when both precision and recall matter but one deserves slightly more attention, such as when false negatives are more costly than false positives. The F2-measure puts more attention on minimizing false negatives, which is what we want here: detecting the sexist comments.
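As a small illustration of the metric itself (with made-up labels), scikit-learn's `fbeta_score` computes it directly:
```python
from sklearn.metrics import fbeta_score

y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # toy labels: 1 = sexist, 0 = non-sexist
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]

# beta=2 weights recall higher than precision, so false negatives are penalized more.
print(fbeta_score(y_true, y_pred, beta=2))
```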
### Training hyperparameters
The following hyperparameters were used during training:
- my_learning_rate = 5E-5
- my_adam_epsilon = 1E-8
- my_number_of_epochs = 8
- my_warmup = 3
- my_mini_batch_size = 32
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Epoch | T. Loss | V. Loss | Accuracy | F1 | Precision | Recall | F2 |
|:-----:|:--------:|:--------:|:--------:|:--------:|:---------:|:--------:|:--------:|
| 1 | 0.478700 | 0.443148 | 0.804386 | 0.830160 | 0.750689 | 0.928450 | 0.886467 |
| 2 | 0.298000 | 0.460549 | 0.823684 | 0.841107 | 0.784661 | 0.906303 | 0.879048 |
| 3 | 0.063600 | 0.706177 | 0.817544 | 0.829508 | 0.799368 | 0.862010 | 0.848708 |
| 4 | 0.078700 | 1.060862 | 0.816667 | 0.836078 | 0.774709 | 0.908007 | 0.877800 |
| 5 | 0.005900 | 1.069239 | 0.808772 | 0.821604 | 0.790551 | 0.855196 | 0.841435 |
| 6 | 0.008300 | 1.184729 | 0.808772 | 0.821604 | 0.790551 | 0.855196 | 0.841435 |
| 7 | 0.001400 | 1.238865 | 0.816667 | 0.829388 | 0.796238 | 0.865417 | 0.850636 |
| 8 | 0.000100 | 1.267197 | 0.815789 | 0.827303 | 0.799682 | 0.856899 | 0.844810 |
| 9 | 0.000100 | 1.267815 | 0.808772 | 0.818937 | 0.799028 | 0.839864 | 0.831366 |
| 10 | 0.000300 | 1.275827 | 0.807895 | 0.818257 | 0.797735 | 0.839864 | 0.831086 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
## Model in Action
Fast usage with pipelines:
``` python
###libraries required
!pip install transformers
from transformers import pipeline
### usage pipelines
model_checkpoint = "robertou2/twitter_sexismo-finetuned-robertuito-exist2021"
pipeline_nlp = pipeline("text-classification", model=model_checkpoint)
pipeline_nlp("mujer al volante peligro!")
#pipeline_nlp("¡me encanta el ipad!")
#pipeline_nlp (["mujer al volante peligro!", "Los hombre tienen más manias que las mujeres", "me encanta el ipad!"] )
# OUTPUT MODEL #
# LABEL_0: "NON SEXISM"or LABEL_1: "SEXISM" and score: probability of accuracy per model.
# [{'label': 'LABEL_1', 'score': 0.9967633485794067}]
# [{'label': 'LABEL_0', 'score': 0.9934417009353638}]
#[{'label': 'LABEL_1', 'score': 0.9967633485794067},
# {'label': 'LABEL_1', 'score': 0.9755664467811584},
# {'label': 'LABEL_0', 'score': 0.9955045580863953}]
```
## More Information Process
### Challenges
One of the main challenges in this process was obtaining a dataset in Spanish. We managed to obtain (upon request) the dataset used in [EXIST: sEXism Identification in Social neTworks](http://nlp.uned.es/exist2021/), which was a great starting point for the model. Unfortunately, this dataset has limitations due to licenses and policies on being shared freely.
This dataset covers any kind of sexist expression or related phenomena, including descriptive or reported statements where the sexist message is a report or description of sexist behaviour. The 3,541 tweets labelled in Spanish were used. We then obtained another dataset in Spanish, [MeTwo: Machismo and Sexism Twitter Identification dataset](https://github.com/franciscorodriguez92/MeTwo). This dataset contains the id of each tweet with its respective label, which allowed us to retrieve the text of each tweet and enlarge the original dataset.
Another challenge was starting the fine-tuning experiments, since there are many variables to validate and test (from models such as BETO or RoBERTa to hyperparameters such as the learning rate), with only a limited window of two weeks, in addition to the learning curve. For this challenge, the first experiments were based on the parameters presented by de Paula et al. (2021), which provided a starting point and a target to beat: the **_0.790 accuracy_** obtained by that previous work on identifying sexist tweets in Spanish.
Several experiments were run in parallel to find the best model. After a collaborative fine-tuning process, we achieved **83% accuracy**.
### Future Work
We propose enlarging the dataset we developed. To do so, larger amounts of Spanish tweets can be downloaded and active learning techniques applied to obtain a reduced set of tweets to label via crowdsourcing, where those labelled examples can then be used to label the rest. Data augmentation techniques can also be used to duplicate and extend the dataset. Running more experiments with other models and improving the model are further challenges proposed as future work.
### Possible Applications
First, it is extremely important to give more visibility to the problem of _sexism on social networks_, particularly in Spanish. Transfer learning allows previously trained models to be reused and leveraged, and the hope is that new research groups, students, etc. will use the current model as a base to develop their own and build a better one. In this way, a tool could be built that identifies sexist tweets in real time and removes them before they spread.
### References
1. de Paula, A. F. M., da Silva, R. F., & Schlicht, I. B. (2021). Sexism Prediction in Spanish and English Tweets Using Monolingual and Multilingual BERT and Ensemble Models. arXiv preprint arXiv:2111.04551.
2. Rodríguez-Sánchez, F., Carrillo-de-Albornoz, J., Plaza, L., Gonzalo, J., Rosso, P., Comet, M., & Donoso, T. (2021). Overview of exist 2021: sexism identification in social networks. Procesamiento del Lenguaje Natural, 67, 195-207.
castorini/monot5-small-msmarco-10k | 77f8e3f7b1eb1afe353aa21a7c3a2fc8feca702e | 2022-05-25T15:08:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | castorini | null | castorini/monot5-small-msmarco-10k | 55 | null | transformers | 5,829 | This model is a T5-small reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch).
For more details on how to use it, check the following links:
- [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example)
- [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md)
- [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md)
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
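If you prefer not to install pygaggle, the sketch below scores a single query–passage pair with plain `transformers`, following the monoT5 input template described in the paper; treat the exact prompt wording and the use of the "true"/"false" target tokens as assumptions to verify against the pygaggle implementation:
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("castorini/monot5-small-msmarco-10k")
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-small-msmarco-10k")

query = "what causes tides"
passage = "Tides are caused by the gravitational pull of the moon and the sun on the oceans."

inputs = tokenizer(f"Query: {query} Document: {passage} Relevant:", return_tensors="pt")
decoder_start = torch.tensor([[model.config.decoder_start_token_id]])

with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_start).logits[0, -1]

# Compare the probabilities of generating "true" vs "false" as the first output token.
true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]
score = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(f"relevance score: {score:.3f}")
```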
|
ClassCat/gpt2-base-japanese-v2 | 52e719968dcd858c9be4b21975bf0778fb8fd828 | 2022-06-25T15:36:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"transformers",
"license:cc-by-sa-4.0"
] | text-generation | false | ClassCat | null | ClassCat/gpt2-base-japanese-v2 | 55 | 1 | transformers | 5,830 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: 天気予報によれば明日は
- text: 私の今日の昼飯は
- text: サッカー日本代表はベルギーに
- text: 日本人サッカー選手が W 杯で
---
## GPT2 Japanese base model version 2
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses the GPT2 base settings except for the vocabulary size.
### Tokenizer
Using BPE tokenizer with vocabulary size 60,000.
### Training Data
* [wiki40b/ja](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bja) (Japanese Wikipedia)
* Subset of [CC-100/ja](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
generator = pipeline('text-generation', model='ClassCat/gpt2-base-japanese-v2')
generator("今度の連休の天気は", max_length=50, num_return_sequences=5)
```
## (Japanese description) GPT2 日本語 ベースモデル・バージョン 2
### 前提条件
transformers==4.19.2
### モデル・アーキテクチャ
このモデルは GPT2 ベースモデルの設定を (語彙サイズ以外は) 使用しています。
### トークナイザー
語彙サイズ 60,000 の BPE トークナイザーを使用しています。
### 訓練データ
* [wiki40b/ja](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bja) (日本語 Wikipedia)
* [CC-100/ja](https://data.statmt.org/cc-100/) のサブセット : Web クロールデータからの単一言語データセット。
### 使用方法
```python
from transformers import pipeline
generator = pipeline('text-generation', model='ClassCat/gpt2-base-japanese-v2')
generator("今度の連休の天気は", max_length=50, num_return_sequences=5)
```
|
saitishmukhametov/ruGTP2-P | 852755dbce62d6346c2a8446e41c31d0f88c0ac5 | 2022-06-09T05:58:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | saitishmukhametov | null | saitishmukhametov/ruGTP2-P | 55 | null | transformers | 5,831 | |
nmcahill/mbti-classifier | 546c5da706171619361cb174ad4e34ab4c7aeb33 | 2022-06-29T20:08:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:afl-3.0"
] | text-classification | false | nmcahill | null | nmcahill/mbti-classifier | 55 | null | transformers | 5,832 | ---
license: afl-3.0
---
|
mvonwyl/distilbert-base-uncased-imdb | e78f2fa182bace5db1195f3672ebd502c9d35157 | 2022-06-25T17:45:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | mvonwyl | null | mvonwyl/distilbert-base-uncased-imdb | 55 | null | transformers | 5,833 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9214
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an imdb dataset where an evaluation of 5000 samples was created by splitting the training set.
It achieves the following results on the evaluation set:
- Loss: 0.6252
- Accuracy: 0.9214
## Model description
More information needed
## Intended uses & limitations
This model was trained for the Introduction to Natural Language Processing course at [EPITA](https://www.epita.fr/).
## Training and evaluation data
The training/evaluation split was generated using a `seed` of 42 and a `test_size` of 0.2.
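A minimal sketch of how such a split can be reproduced with the `datasets` library (assuming the IMDB train split as the starting point):
```python
from datasets import load_dataset

imdb = load_dataset("imdb")

# 80/20 split of the original training set, made reproducible by the seed.
splits = imdb["train"].train_test_split(test_size=0.2, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
print(len(train_ds), len(eval_ds))  # 20000 5000
```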
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2875 | 1.0 | 625 | 0.2286 | 0.9102 |
| 0.1685 | 2.0 | 1250 | 0.2416 | 0.9128 |
| 0.1171 | 3.0 | 1875 | 0.3223 | 0.917 |
| 0.0493 | 4.0 | 2500 | 0.3667 | 0.9162 |
| 0.023 | 5.0 | 3125 | 0.4074 | 0.92 |
| 0.015 | 6.0 | 3750 | 0.4291 | 0.9236 |
| 0.0129 | 7.0 | 4375 | 0.5452 | 0.9194 |
| 0.0051 | 8.0 | 5000 | 0.5886 | 0.9146 |
| 0.0027 | 9.0 | 5625 | 0.6310 | 0.9186 |
| 0.002 | 10.0 | 6250 | 0.6252 | 0.9214 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Jeevesh8/goog_bert_ft_cola-3 | 025545ca908d8ba581dda1f11fc2d7f411301962 | 2022-06-29T17:31:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-3 | 55 | null | transformers | 5,834 | Entry not found |
Evelyn18/distilbert-base-uncased-becasv2-2 | d89961c38b2debfb58fb4d1805c438419ed6e399 | 2022-07-07T03:47:53.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/distilbert-base-uncased-becasv2-2 | 55 | null | transformers | 5,835 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-becasv2-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becasv2-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 4.8334 |
| No log | 2.0 | 18 | 3.9395 |
| No log | 3.0 | 27 | 3.4886 |
| No log | 4.0 | 36 | 3.2190 |
| No log | 5.0 | 45 | 3.0781 |
| No log | 6.0 | 54 | 2.9878 |
| No log | 7.0 | 63 | 2.9336 |
| No log | 8.0 | 72 | 2.9170 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DL4NLP-Group11/xtremedistil-l6-h256-uncased-squad | 451c05e65c7b1bd2de9e1b6523ecd1c34cb795bc | 2022-07-14T20:08:11.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad",
"transformers",
"autotrain_compatible"
] | question-answering | false | DL4NLP-Group11 | null | DL4NLP-Group11/xtremedistil-l6-h256-uncased-squad | 55 | null | transformers | 5,836 | ---
language: en
datasets:
- squad
metrics:
- squad
widget:
- text: "Who is the best girl in NieR:Automata?"
context: "2B is a fictional character from the game NieR: Automata. She is considered by many to be best girl of the series, perhaps due to her appealing design (wearing quite a provoking outfit) and to the great character development she experiences throughout the game. Her thighs may also play a role in her immense popularity (at least for some fans). Alongside 9S, she is an android from YoRHa, a military force that fights against aliens and their machine lifeforms."
---
# xtremedistil-l6-h256-uncased fine-tuned on SQuAD
This model was developed as part of a project for the Deep Learning for NLP (DL4NLP) lecture at Technische Universität Darmstadt (2022). It uses [xtremedistil-l6-h256-uncased]( https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) as a base model and was fine-tuned on the [SQuAD dataset](https://huggingface.co/datasets/squad) for Question Answering. It makes no distinction between uppercase and lowercase words.
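A minimal usage sketch with the `transformers` question-answering pipeline (the question/context pair is taken from the widget example above):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="DL4NLP-Group11/xtremedistil-l6-h256-uncased-squad")

result = qa(
    question="Who is the best girl in NieR:Automata?",
    context="2B is a fictional character from the game NieR: Automata. "
            "She is considered by many to be best girl of the series.",
)
print(result["answer"], result["score"])
```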
## Dataset
As mentioned previously, the SQuAD dataset was used to train and evaluate the model. It was downloaded from [GitHub](https://github.com/mrqa/MRQA-Shared-Task-2019) and is divided into the following splits.
| Split | Number of examples |
| ---------- | ------------------ |
| Training | 86 588 |
| Evaluation | 10 507 |
The following script was used to download, prepare and load the dataset so that it could be appropriately used by the model. Although it was not downloaded directly from Hugging Face, the dataset was formatted exactly the same way as the version available on Hugging Face.
```python
import os
import json

from datasets import load_dataset

dataset_directory = 'dataset'
train_file = 'train.json'
dev_file = 'dev.json'

if not os.path.exists(dataset_directory):
    print('Creating dataset directory\n')
    os.makedirs(dataset_directory)

# download train and dev splits from the dataset
!wget https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz -O dataset/train.jsonl.gz
!wget https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/SQuAD.jsonl.gz -O dataset/dev.jsonl.gz

# unpack the files
!gzip -d dataset/train.jsonl.gz
!gzip -d dataset/dev.jsonl.gz

def prepare_data(dir, file_name):
    data = []
    with open(f'{dir}/{file_name}l', 'r') as f:
        # skip header
        next(f)
        for line in f:
            entry = json.loads(line)
            for qas in entry['qas']:
                answer_start = []
                for answer in qas['detected_answers']:
                    answer_start.append(answer['char_spans'][0][0])
                data.append({
                    'id': qas['id'],
                    'context': entry['context'],
                    'question': qas['question'],
                    'answers': {
                        'text': qas['answers'],
                        'answer_start': answer_start
                    }
                })
    with open(f'{dir}/{file_name}', 'w') as f:
        for entry in data:
            json.dump(entry, f)
            f.write('\n')
    os.remove(f'{dir}/{file_name}l')

prepare_data(dataset_directory, train_file)
prepare_data(dataset_directory, dev_file)

data_files = {'train': train_file, 'validation': dev_file}
dataset = load_dataset(dataset_directory, data_files=data_files)
```
## Hyperparameters
The hyperparameters utilized to fine-tune the model are listed below.
- epochs: 2
- train_batch_size: 16
- eval_batch_size: 32
- optimizer: adamW
- lr: 5e-5
- weight_decay: 0.01
- lr_scheduler: linear
- num_warmup_steps: 0
- max_length: 512
## Fine-Tuning and Evaluation
Most of the code used to pre-process the dataset, define a training loop and post-process the predictions generated by the model was adapted from the [Question Answering course](https://huggingface.co/course/chapter7/7) from Hugging Face.
The model was fine-tuned using GPU acceleration on Google Colab. The entire training and evaluation process took approximately 1h10min. More specifically, for each epoch, the training step was completed in 17-18 minutes, while the evaluation lasted for about 16-18 minutes.
After fine-tuning, the following results were achieved on the evaluation set (using the [squad metric](https://huggingface.co/spaces/evaluate-metric/squad)):
| Metric | Value |
| ---------------- | ----------------- |
| Exact Match (EM) | 61.91110688112687 |
| F1-Score | 77.2232806051733 |
|
semy/hf-model-full-0 | e1511ab321f6d7d30b5fd88a727b72e825208cf6 | 2022-07-22T07:02:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | semy | null | semy/hf-model-full-0 | 55 | null | transformers | 5,837 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: hf-model-full-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hf-model-full-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4295
- Accuracy: 0.802
- F1: 0.802
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|
| 0.9446 | 1.0 | 563 | 0.4208 | 0.793 | 0.793 |
| 0.1259 | 2.0 | 1126 | 0.4295 | 0.802 | 0.802 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Ian-AI/EalAIn | c5e1c0e027735f3136a43be06c63dddefbc1501e | 2022-07-21T16:47:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:afl-3.0"
] | text-generation | false | Ian-AI | null | Ian-AI/EalAIn | 55 | null | transformers | 5,838 | ---
license: afl-3.0
---
EalAIn is a DialoGPT model loosely based on "Janet" from the TV series "The Good Place". This particular instance of Janet responds to the name "Ealain" and has some knowledge about art. It will, at times, promote me as an AI artist.
FinanceInc/auditor_sentiment_finetuned | 928330ed06b005d85486e392015aab22710b5a40 | 2022-07-22T21:05:05.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:rajistics/autotrain-data-auditor-sentiment",
"dataset:FinanceInc/auditor_sentiment",
"transformers",
"autotrain",
"DEV",
"model-index",
"co2_eq_emissions"
] | text-classification | false | FinanceInc | null | FinanceInc/auditor_sentiment_finetuned | 55 | null | transformers | 5,839 | ---
language: en
tags:
- autotrain
- DEV
widget:
- text: "Operating profit jumped to EUR 47 million from EUR 6.6 million"
datasets:
- rajistics/autotrain-data-auditor-sentiment
- FinanceInc/auditor_sentiment
co2_eq_emissions: 3.165771608457648
model-index:
- name: auditor_sentiment_finetuned
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: FinanceInc/auditor_sentiment
type: glue
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.862
verified: true
- name: F1
type: f1
value: 0.845
verified: true
- name: Recall
type: recall
value: 0.846
verified: true
- name: Precision
type: precision
value: 0.844
verified: true
- task:
type: text-classification
name: Text Classification
dataset:
name: FinanceInc/auditor_sentiment_2021
type: glue
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.848937
verified: true
- name: F1
type: f1
value: 0.848282
verified: true
- name: Recall
type: recall
value: 0.808937
verified: true
- name: Precision
type: precision
value: 0.818542
verified: true
---
# Auditor Review Sentiment Model
This model has been fine-tuned from the proprietary version of [FinBERT](https://huggingface.co/FinanceInc/finbert-pretrain), trained internally on demo.org's proprietary dataset of auditor sentiment evaluations.
FinBERT is a BERT model pre-trained on a large corpus of financial texts. Its purpose is to enhance NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from this model without needing the significant computational resources required to train it.
# Training Data
This model was fine-tuned using [Autotrain](https://ui.autotrain.huggingface.co/11671/metrics) on the demo-org/auditor_review dataset.
# Model Status
This model is currently being evaluated in development until the end of the quarter. Based on the results, it may be elevated to production.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: [1167143226](https://huggingface.co/rajistics/autotrain-auditor-sentiment-1167143226)
- CO2 Emissions (in grams): 3.165771608457648
## Validation Metrics
- Loss: 0.3418470025062561
- Accuracy: 0.8617131062951496
- Macro F1: 0.8448284352912685
- Micro F1: 0.8617131062951496
- Weighted F1: 0.8612696670395574
- Macro Precision: 0.8440532616584138
- Micro Precision: 0.8617131062951496
- Weighted Precision: 0.8612762332366959
- Macro Recall: 0.8461980005490884
- Micro Recall: 0.8617131062951496
- Weighted Recall: 0.8617131062951496
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/rajistics/autotrain-auditor-sentiment-1167143226
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("rajistics/autotrain-auditor-sentiment-1167143226", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rajistics/autotrain-auditor-sentiment-1167143226", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
sguskin/minilmv2-L6-H384-squad1.1 | b66ba827e1e82f5f51df48b3c31be992d21b68ff | 2022-07-28T07:28:50.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sguskin | null | sguskin/minilmv2-L6-H384-squad1.1 | 55 | null | transformers | 5,840 | Entry not found |
BenDavis71/GPT-2-Finetuning-AIRaid | 61b9ea3097434a2536b3a9a0feecde55f040295b | 2021-05-21T09:29:22.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BenDavis71 | null | BenDavis71/GPT-2-Finetuning-AIRaid | 54 | null | transformers | 5,841 | Entry not found |
Contrastive-Tension/BERT-Base-CT-STSb | 0ca67e247c982bf1b070f72a6e1e30705a7204f7 | 2021-05-18T17:48:15.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Contrastive-Tension | null | Contrastive-Tension/BERT-Base-CT-STSb | 54 | null | transformers | 5,842 | Entry not found |
Geotrend/distilbert-base-fr-cased | 5430ab9a994b5516c5d6021d4972a29bbbd9c31c | 2021-08-16T13:21:13.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"fr",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-fr-cased | 54 | 1 | transformers | 5,843 | ---
language: fr
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-fr-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-fr-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-fr-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Mutlilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Helsinki-NLP/opus-mt-afa-en | 90ea9ea3351bd6eb88f7d7fc3289823cdb8a3abe | 2021-01-18T07:46:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-afa-en | 54 | null | transformers | 5,844 | ---
language:
- so
- ti
- am
- he
- mt
- ar
- afa
- en
tags:
- translation
license: apache-2.0
---
### afa-eng
* source group: Afro-Asiatic languages
* target group: English
* OPUS readme: [afa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md)
* model: transformer
* source language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.eval.txt)
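A minimal translation sketch with the `transformers` Marian classes (the Arabic example sentence simply means "Hello, world"):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-afa-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["مرحبا بالعالم"]  # Arabic for "Hello, world", used here as an example source sentence

batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```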
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.amh-eng.amh.eng | 35.9 | 0.550 |
| Tatoeba-test.ara-eng.ara.eng | 36.6 | 0.543 |
| Tatoeba-test.hau-eng.hau.eng | 11.9 | 0.327 |
| Tatoeba-test.heb-eng.heb.eng | 42.7 | 0.591 |
| Tatoeba-test.kab-eng.kab.eng | 4.3 | 0.213 |
| Tatoeba-test.mlt-eng.mlt.eng | 44.3 | 0.618 |
| Tatoeba-test.multi.eng | 27.1 | 0.464 |
| Tatoeba-test.rif-eng.rif.eng | 3.5 | 0.141 |
| Tatoeba-test.shy-eng.shy.eng | 0.6 | 0.125 |
| Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 |
| Tatoeba-test.tir-eng.tir.eng | 13.1 | 0.328 |
### System Info:
- hf_name: afa-eng
- source_languages: afa
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa', 'en']
- src_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt
- src_alpha3: afa
- tgt_alpha3: eng
- short_pair: afa-en
- chrF2_score: 0.46399999999999997
- bleu: 27.1
- brevity_penalty: 1.0
- ref_len: 69373.0
- src_name: Afro-Asiatic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: afa
- tgt_alpha2: en
- prefer_old: False
- long_pair: afa-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-iir | c7cf41bec850d260be15e942fbb94ff6ae1f7641 | 2021-01-18T08:09:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"ps",
"os",
"as",
"si",
"iir",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-iir | 54 | null | transformers | 5,845 | ---
language:
- en
- bn
- or
- gu
- mr
- ur
- hi
- ps
- os
- as
- si
- iir
tags:
- translation
license: apache-2.0
---
### eng-iir
* source group: English
* target group: Indo-Iranian languages
* OPUS readme: [eng-iir](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md)
* model: transformer
* source language(s): eng
* target language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.eval.txt)
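A minimal sketch showing the required sentence-initial target-language token (here `>>hin<<` for Hindi, one of the listed target languages):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-iir"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>id<< prefix selects the target language (Hindi in this example).
src_text = [">>hin<< How are you today?"]

batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```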
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-enghin.eng.hin | 6.7 | 0.326 |
| newsdev2019-engu-engguj.eng.guj | 6.0 | 0.283 |
| newstest2014-hien-enghin.eng.hin | 10.4 | 0.353 |
| newstest2019-engu-engguj.eng.guj | 6.6 | 0.282 |
| Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.249 |
| Tatoeba-test.eng-awa.eng.awa | 0.4 | 0.122 |
| Tatoeba-test.eng-ben.eng.ben | 15.3 | 0.459 |
| Tatoeba-test.eng-bho.eng.bho | 3.7 | 0.161 |
| Tatoeba-test.eng-fas.eng.fas | 3.4 | 0.227 |
| Tatoeba-test.eng-guj.eng.guj | 18.5 | 0.365 |
| Tatoeba-test.eng-hif.eng.hif | 1.0 | 0.064 |
| Tatoeba-test.eng-hin.eng.hin | 17.0 | 0.461 |
| Tatoeba-test.eng-jdt.eng.jdt | 3.9 | 0.122 |
| Tatoeba-test.eng-kok.eng.kok | 5.5 | 0.059 |
| Tatoeba-test.eng-kur.eng.kur | 4.0 | 0.125 |
| Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.008 |
| Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.445 |
| Tatoeba-test.eng-mar.eng.mar | 20.7 | 0.473 |
| Tatoeba-test.eng.multi | 13.7 | 0.392 |
| Tatoeba-test.eng-nep.eng.nep | 0.6 | 0.060 |
| Tatoeba-test.eng-ori.eng.ori | 2.4 | 0.193 |
| Tatoeba-test.eng-oss.eng.oss | 2.1 | 0.174 |
| Tatoeba-test.eng-pan.eng.pan | 9.7 | 0.355 |
| Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.126 |
| Tatoeba-test.eng-rom.eng.rom | 1.3 | 0.230 |
| Tatoeba-test.eng-san.eng.san | 1.3 | 0.101 |
| Tatoeba-test.eng-sin.eng.sin | 11.7 | 0.384 |
| Tatoeba-test.eng-snd.eng.snd | 2.8 | 0.180 |
| Tatoeba-test.eng-tgk.eng.tgk | 8.1 | 0.353 |
| Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.015 |
| Tatoeba-test.eng-urd.eng.urd | 12.3 | 0.409 |
| Tatoeba-test.eng-zza.eng.zza | 0.5 | 0.025 |
### System Info:
- hf_name: eng-iir
- source_languages: eng
- target_languages: iir
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']
- src_constituents: {'eng'}
- tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: iir
- short_pair: en-iir
- chrF2_score: 0.392
- bleu: 13.7
- brevity_penalty: 1.0
- ref_len: 63351.0
- src_name: English
- tgt_name: Indo-Iranian languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: iir
- prefer_old: False
- long_pair: eng-iir
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en_el_es_fi-en_el_es_fi | 8d915c98083ed0dc9f8dfdd2e465f18936fff802 | 2021-09-09T21:40:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"el",
"es",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en_el_es_fi-en_el_es_fi | 54 | 1 | transformers | 5,846 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en_el_es_fi-en_el_es_fi
* source languages: en,el,es,fi
* target languages: en,el,es,fi
* OPUS readme: [en+el+es+fi-en+el+es+fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en+el+es+fi-en+el+es+fi/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-03-02.zip](https://object.pouta.csc.fi/OPUS-MT-models/en+el+es+fi-en+el+es+fi/opus-2020-03-02.zip)
* test set translations: [opus-2020-03-02.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en+el+es+fi-en+el+es+fi/opus-2020-03-02.test.txt)
* test set scores: [opus-2020-03-02.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en+el+es+fi-en+el+es+fi/opus-2020-03-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2015-enfi.en.fi | 16.0 | 0.498 |
| newssyscomb2009.en.es | 29.9 | 0.570 |
| newssyscomb2009.es.en | 29.7 | 0.569 |
| news-test2008.en.es | 27.3 | 0.549 |
| news-test2008.es.en | 27.3 | 0.548 |
| newstest2009.en.es | 28.4 | 0.564 |
| newstest2009.es.en | 28.4 | 0.564 |
| newstest2010.en.es | 34.0 | 0.599 |
| newstest2010.es.en | 34.0 | 0.599 |
| newstest2011.en.es | 35.1 | 0.600 |
| newstest2012.en.es | 35.4 | 0.602 |
| newstest2013.en.es | 31.9 | 0.576 |
| newstest2015-enfi.en.fi | 17.8 | 0.509 |
| newstest2016-enfi.en.fi | 19.0 | 0.521 |
| newstest2017-enfi.en.fi | 21.2 | 0.539 |
| newstest2018-enfi.en.fi | 13.9 | 0.478 |
| newstest2019-enfi.en.fi | 18.8 | 0.503 |
| newstestB2016-enfi.en.fi | 14.9 | 0.491 |
| newstestB2017-enfi.en.fi | 16.9 | 0.503 |
| simplification.en.en | 63.0 | 0.798 |
| Tatoeba.en.fi | 56.7 | 0.719 |
|
Helsinki-NLP/opus-mt-om-en | 08917ded8bae79f2c459b4be133ec12a8f497315 | 2021-09-10T14:00:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"om",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-om-en | 54 | null | transformers | 5,847 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-om-en
* source languages: om
* target languages: en
* OPUS readme: [om-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/om-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.om.en | 27.3 | 0.448 |
|
Helsinki-NLP/opus-mt-sem-sem | 3bc83774049409237ecbdc84779c50d7016b1674 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"mt",
"ar",
"he",
"ti",
"am",
"sem",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sem-sem | 54 | null | transformers | 5,848 | ---
language:
- mt
- ar
- he
- ti
- am
- sem
tags:
- translation
license: apache-2.0
---
### sem-sem
* source group: Semitic languages
* target group: Semitic languages
* OPUS readme: [sem-sem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-sem/README.md)
* model: transformer
* source language(s): apc ara arq arz heb mlt
* target language(s): apc ara arq arz heb mlt
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.eval.txt)
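As a sketch, translation with the `transformers` pipeline looks like the following; the `>>heb<<` target token (Hebrew) and the Arabic example sentence are illustrative:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sem-sem")

# The target-language token is prepended to the source sentence.
print(translator(">>heb<< مرحبا، كيف حالك؟"))
```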
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara-ara.ara.ara | 4.2 | 0.200 |
| Tatoeba-test.ara-heb.ara.heb | 34.0 | 0.542 |
| Tatoeba-test.ara-mlt.ara.mlt | 16.6 | 0.513 |
| Tatoeba-test.heb-ara.heb.ara | 18.8 | 0.477 |
| Tatoeba-test.mlt-ara.mlt.ara | 20.7 | 0.388 |
| Tatoeba-test.multi.multi | 27.1 | 0.507 |
### System Info:
- hf_name: sem-sem
- source_languages: sem
- target_languages: sem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-sem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem']
- src_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- tgt_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.test.txt
- src_alpha3: sem
- tgt_alpha3: sem
- short_pair: sem-sem
- chrF2_score: 0.507
- bleu: 27.1
- brevity_penalty: 0.972
- ref_len: 13472.0
- src_name: Semitic languages
- tgt_name: Semitic languages
- train_date: 2020-07-27
- src_alpha2: sem
- tgt_alpha2: sem
- prefer_old: False
- long_pair: sem-sem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
NYTK/sentiment-hts2-hubert-hungarian | 7afdc64bc3fd66174b5ebb068e1fc0870b914903 | 2022-01-26T13:20:21.000Z | [
"pytorch",
"bert",
"text-classification",
"hu",
"transformers",
"license:gpl"
] | text-classification | false | NYTK | null | NYTK/sentiment-hts2-hubert-hungarian | 54 | null | transformers | 5,849 | ---
language:
- hu
tags:
- text-classification
license: gpl
metrics:
- accuracy
widget:
- text: "Jó reggelt! majd küldöm az élményhozókat :)."
---
# Hungarian Sentence-level Sentiment Analysis model with huBERT
For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Pretrained model used: huBERT
- Finetuned on Hungarian Twitter Sentiment (HTS) Corpus
- Labels: 1, 2
## Limitations
- max_seq_length = 128
## Results
| Model | HTS2 | HTS5 |
| ------------- | ------------- | ------------- |
| huBERT | **85.55** | 68.99 |
| XLM-RoBERTa | 85.56 | 85.56 |
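## Usage
A minimal inference sketch using the `transformers` pipeline; the example sentence is the widget text from this card, and the label names in the output follow the model's configuration (labels 1 and 2 above):
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="NYTK/sentiment-hts2-hubert-hungarian",
)

print(classifier("Jó reggelt! majd küldöm az élményhozókat :)."))
```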
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{yang-bart,
title = {Improving Performance of Sentence-level Sentiment Analysis with Data Augmentation Methods},
booktitle = {Proceedings of 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2021)},
year = {2021},
publisher = {IEEE},
address = {Online},
author = {{Laki, László and Yang, Zijian Győző}},
pages = {417--422}
}
``` |
SetFit/deberta-v3-large__sst2__train-8-2 | 0843471896f06585114d90510408ce32e9064962 | 2022-02-10T08:35:53.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-8-2 | 54 | null | transformers | 5,850 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-2
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6794
- Accuracy: 0.6063
## Model description
More information needed
## Intended uses & limitations
More information needed
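As a sketch (not part of the original card), the checkpoint can be loaded like any `transformers` sequence-classification model; the example sentence below is illustrative:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "SetFit/deberta-v3-large__sst2__train-8-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A charming and often affecting journey.", return_tensors="pt")
with torch.no_grad():
    predicted_class = model(**inputs).logits.argmax(dim=-1).item()
print(predicted_class)
```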
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6942 | 1.0 | 3 | 0.7940 | 0.25 |
| 0.6068 | 2.0 | 6 | 0.9326 | 0.25 |
| 0.6553 | 3.0 | 9 | 0.7979 | 0.25 |
| 0.475 | 4.0 | 12 | 0.7775 | 0.25 |
| 0.377 | 5.0 | 15 | 0.7477 | 0.25 |
| 0.3176 | 6.0 | 18 | 0.6856 | 0.75 |
| 0.2708 | 7.0 | 21 | 0.6554 | 0.75 |
| 0.2855 | 8.0 | 24 | 0.8129 | 0.5 |
| 0.148 | 9.0 | 27 | 0.7074 | 0.75 |
| 0.0947 | 10.0 | 30 | 0.7090 | 0.75 |
| 0.049 | 11.0 | 33 | 0.7885 | 0.75 |
| 0.0252 | 12.0 | 36 | 0.9203 | 0.75 |
| 0.0165 | 13.0 | 39 | 1.0937 | 0.75 |
| 0.0084 | 14.0 | 42 | 1.2502 | 0.75 |
| 0.0059 | 15.0 | 45 | 1.3726 | 0.75 |
| 0.0037 | 16.0 | 48 | 1.4784 | 0.75 |
| 0.003 | 17.0 | 51 | 1.5615 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
cankeles/DPTNet_WHAMR_enhsingle_16k | a8366a7337463c149b151a3152cab84bceafb3a6 | 2022-02-17T19:53:06.000Z | [
"pytorch",
"dataset:Libri1Mix",
"dataset:enh_single",
"asteroid",
"audio",
"DPTNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | cankeles | null | cankeles/DPTNet_WHAMR_enhsingle_16k | 54 | 1 | asteroid | 5,851 | ---
tags:
- asteroid
- audio
- DPTNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `cankeles/DPTNet_WHAMR_enhsingle_16k`
Description:
This model was trained by M. Can Keleş using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
mode: min
nondefault_nsrc: null
sample_rate: 16000
segment: 2.0
task: enh_single
train_dir: wav16k/min/tr/
valid_dir: wav16k/min/cv/
filterbank:
kernel_size: 16
n_filters: 64
stride: 8
main_args:
exp_dir: exp/tmp
help: null
masknet:
bidirectional: true
chunk_size: 100
dropout: 0
ff_activation: relu
ff_hid: 256
hop_size: 50
in_chan: 64
mask_act: sigmoid
n_repeats: 2
n_src: 1
norm_type: gLN
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
positional arguments: {}
scheduler:
d_model: 64
steps_per_epoch: 10000
training:
batch_size: 4
early_stop: true
epochs: 60
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On the custom min test set:
```yml
'sar': 12.853384266251018,
'sar_imp': 8.950332361953906,
'sdr': 12.853384266251018,
'sdr_imp': 8.950332361953906,
'si_sdr': 12.247012621312548,
'si_sdr_imp': 8.429646186633407,
'sir': inf,
'sir_imp': nan,
'stoi': 0.9022338865380519,
'stoi_imp': 0.09735707619500522
```
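A minimal enhancement sketch, assuming the `asteroid` and `soundfile` packages are installed; the input and output file names are placeholders:
```python
import soundfile as sf
import torch
from asteroid.models import BaseModel

# Load the pretrained DPTNet checkpoint from the Hub.
model = BaseModel.from_pretrained("cankeles/DPTNet_WHAMR_enhsingle_16k")

# Enhance a single-channel 16 kHz recording (file names are illustrative).
mixture, rate = sf.read("noisy_speech_16k.wav", dtype="float32")
with torch.no_grad():
    estimate = model(torch.from_numpy(mixture).unsqueeze(0))
sf.write("enhanced_speech_16k.wav", estimate.squeeze().cpu().numpy(), rate)
```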
|
dbmdz/bert-base-historic-dutch-cased | 0a7074b8d8d9f737286c7e3ab44ced1982f08cd3 | 2021-12-13T16:31:17.000Z | [
"pytorch",
"tf",
"tensorboard",
"bert",
"fill-mask",
"dutch",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/bert-base-historic-dutch-cased | 54 | null | transformers | 5,852 | ---
language: dutch
license: mit
widget:
- text: "de [MASK] vau Financien, in hec vorige jaar, da inkomswi"
---
# Language Model for Historic Dutch
In this repository we open source a language model for Historic Dutch, trained on the
[Delpher Corpus](https://www.delpher.nl/over-delpher/delpher-open-krantenarchief/download-teksten-kranten-1618-1879),
which includes digitized texts from Dutch newspapers ranging from 1618 to 1879.
# Changelog
* 13.12.2021: Initial version of this repository.
# Model Zoo
The following models for Historic Dutch are available on the Hugging Face Model Hub:
| Model identifier | Model Hub link
| -------------------------------------- | -------------------------------------------------------------------
| `dbmdz/bert-base-historic-dutch-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-dutch-cased)
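A minimal usage sketch with the `fill-mask` pipeline; the masked sentence is the example from this card's widget:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-historic-dutch-cased")
print(fill_mask("de [MASK] vau Financien, in hec vorige jaar, da inkomswi"))
```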
# Stats
The download urls for all archives can be found [here](delpher-corpus.urls).
We then used the awesome `alto-tools` from [this](https://github.com/cneud/alto-tools)
repository to extract plain text. The following table shows the size overview per year range:
| Period | Extracted plain text size
| --------- | -------------------------:
| 1618-1699 | 170MB
| 1700-1709 | 103MB
| 1710-1719 | 65MB
| 1720-1729 | 137MB
| 1730-1739 | 144MB
| 1740-1749 | 188MB
| 1750-1759 | 171MB
| 1760-1769 | 235MB
| 1770-1779 | 271MB
| 1780-1789 | 414MB
| 1790-1799 | 614MB
| 1800-1809 | 734MB
| 1810-1819 | 807MB
| 1820-1829 | 987MB
| 1830-1839 | 1.7GB
| 1840-1849 | 2.2GB
| 1850-1854 | 1.3GB
| 1855-1859 | 1.7GB
| 1860-1864 | 2.0GB
| 1865-1869 | 2.3GB
| 1870-1874 | 1.9GB
| 1875-1876 | 867MB
| 1877-1879 | 1.9GB
The total training corpus consists of 427,181,269 sentences and 3,509,581,683 tokens (counted via `wc`),
resulting in a total corpus size of 21GB.
The following figure shows an overview of the number of chars per year distribution:

# Language Model Pretraining
We use the official [BERT](https://github.com/google-research/bert) implementation using the following command
to train the model:
```bash
python3 run_pretraining.py --input_file gs://delpher-bert/tfrecords/*.tfrecord \
--output_dir gs://delpher-bert/bert-base-historic-dutch-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
We train the model for 3M steps using a total batch size of 128 on a v3-32 TPU. The pretraining loss curve can be seen
in the next figure:

# Evaluation
We evaluate our model on the preprocessed Europeana NER dataset for Dutch, that was presented in the
["Data Centric Domain Adaptation for Historical Text with OCR Errors"](https://github.com/stefan-it/historic-domain-adaptation-icdar) paper.
The data is available in their repository. We perform a hyper-parameter search for:
* Batch sizes: `[4, 8]`
* Learning rates: `[3e-5, 5e-5]`
* Number of epochs: `[5, 10]`
and report the averaged F1-score over 5 runs with different seeds. We also include [hmBERT](https://github.com/stefan-it/clef-hipe/blob/main/hlms.md) as a baseline model.
Results:
| Model | F1-Score (Dev / Test)
| ------------------- | ---------------------
| hmBERT | (82.73) / 81.34
| Maerz et al. (2021) | - / 84.2
| Ours | (89.73) / 87.45
# License
All models are licensed under [MIT](LICENSE).
# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
We thank [Clemens Neudecker](https://github.com/cneud) for maintaining the amazing
[ALTO tools](https://github.com/cneud/alto-tools) that were used for parsing the Delpher Corpus XML files.
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
efederici/text2tags | 5441fe47bfbfea04e556fd043baa988b79b78081 | 2022-05-26T10:51:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"it",
"transformers",
"summarization",
"tags",
"Italian",
"autotrain_compatible"
] | summarization | false | efederici | null | efederici/text2tags | 54 | null | transformers | 5,853 | ---
language:
- it
tags:
- summarization
- tags
- Italian
inference:
parameters:
do_sample: False
min_length: 0
widget:
- text: "Nel 1924 la scrittrice Virginia Woolf affrontò nel saggio Mr Bennett e Mrs Brown il tema della costruzione e della struttura del romanzo, genere all’epoca considerato in declino a causa dell’incapacità degli autori e delle autrici di creare personaggi realistici. Woolf raccontò di aver a lungo osservato, durante un viaggio in treno da Richmond a Waterloo, una signora di oltre 60 anni seduta davanti a lei, chiamata signora Brown. Ne rimase affascinata, per la capacità di quella figura di evocare storie possibili e fare da spunto per un romanzo: «tutti i romanzi cominciano con una vecchia signora seduta in un angolo». Immagini come quella della signora Brown, secondo Woolf, «costringono qualcuno a cominciare, quasi automaticamente, a scrivere un romanzo». Nel saggio Woolf provò ad analizzare le tecniche narrative utilizzate da tre noti scrittori inglesi dell’epoca – H. G. Wells, John Galsworthy e Arnold Bennett – per comprendere perché le convenzioni stilistiche dell’Ottocento risultassero ormai inadatte alla descrizione dei «caratteri» umani degli anni Venti. In un lungo e commentato articolo del New Yorker, la critica letteraria e giornalista Parul Sehgal, a lungo caporedattrice dell’inserto culturale del New York Times dedicato alle recensioni di libri, ha provato a compiere un esercizio simile a quello di Woolf, chiedendosi come gli autori e le autrici di oggi tratterebbero la signora Brown. E ha immaginato che probabilmente quella figura non eserciterebbe su di loro una curiosità e un fascino legati alla sua incompletezza e al suo aspetto misterioso, ma con ogni probabilità trasmetterebbe loro l’indistinta e generica impressione di aver subìto un trauma."
example_title: "Virginia Woolf"
- text: "I lavori di ristrutturazione dell’interno della cattedrale di Notre-Dame a Parigi, seguiti al grande incendio che nel 2019 bruciò la guglia e buona parte del tetto, sono da settimane al centro di un acceso dibattito sui giornali francesi per via di alcune proposte di rinnovamento degli interni che hanno suscitato critiche e allarmi tra esperti e opinionisti conservatori. Il progetto ha ricevuto una prima approvazione dalla commissione nazionale competente, ma dovrà ancora essere soggetto a varie revisioni e ratifiche che coinvolgeranno tecnici e politici locali e nazionali, fino al presidente Emmanuel Macron. Ma le modifiche previste al sistema di viabilità per i visitatori, all’illuminazione, ai posti a sedere e alle opere d’arte che si vorrebbero esporre hanno portato alcuni critici a parlare di «parco a tema woke» e «Disneyland del politicamente corretto»."
example_title: "Notre-Dame"
---
# text2tags
The model has been trained on a collection of 28k news articles with tags. Its purpose is to create tags suitable for a given article. The model can also be used for information-retrieval purposes (GenQ), e.g. to fine-tune sentence-transformers for asymmetric semantic search.
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/1/1a/Pieter_Bruegel_d._%C3%84._066.jpg" width="600"> </br>
Pieter Bruegel the Elder, The Fight Between Carnival and Lent, 1559
</p>
### Usage
Sample code with an article from IlPost:
```python
from transformers import T5ForConditionalGeneration,T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("efederici/text2tags")
tokenizer = T5Tokenizer.from_pretrained("efederici/text2tags")
article = '''
Da bambino era preoccupato che al mondo non ci fosse più nulla da scoprire. Ma i suoi stessi studi gli avrebbero dato torto: insieme a James Watson, nel 1953 Francis Crick strutturò il primo modello di DNA, la lunga sequenza di codici che identifica ogni essere vivente, rendendolo unico e diverso da tutti gli altri.
La scoperta gli valse il Nobel per la Medicina. È uscita in queste settimane per Codice la sua biografia, Francis Crick — Lo scopritore del DNA, scritta da Matt Ridley, che racconta vita e scienza dell'uomo che capì perché siamo fatti così.
'''
def tag(text: str):
""" Generates tags from given text """
text = text.strip().replace('\n', '')
text = 'summarize: ' + text
tokenized_text = tokenizer.encode(text, return_tensors="pt")
tags_ids = model.generate(tokenized_text,
num_beams=4,
no_repeat_ngram_size=2,
max_length=20,
early_stopping=True)
output = tokenizer.decode(tags_ids[0], skip_special_tokens=True)
return output.split(', ')
tags = tag(article)
print(tags)
```
## Longer documents
Assuming paragraphs are divided by: '\n\n'.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import itertools
import re
import torch
model = T5ForConditionalGeneration.from_pretrained("efederici/text2tags")
tokenizer = T5Tokenizer.from_pretrained("efederici/text2tags")
article = '''
Da bambino era preoccupato che al mondo non ci fosse più nulla da scoprire. Ma i suoi stessi studi gli avrebbero dato torto: insieme a James Watson, nel 1953 Francis Crick strutturò il primo modello di DNA, la lunga sequenza di codici che identifica ogni essere vivente, rendendolo unico e diverso da tutti gli altri.
La scoperta gli valse il Nobel per la Medicina. È uscita in queste settimane per Codice la sua biografia, Francis Crick — Lo scopritore del DNA, scritta da Matt Ridley, che racconta vita e scienza dell'uomo che capì perché siamo fatti così.
'''
def words(text):
input_str = text
output_str = re.sub('[^A-Za-z0-9]+', ' ', input_str)
return output_str.split()
def is_subset(text1, text2):
return all(tag in words(text1.lower()) for tag in text2.split())
def cleaning(text, tags):
return [tag for tag in tags if is_subset(text, tag)]
def get_texts(text, max_len):
texts = list(filter(lambda x : x != '', text.split('\n\n')))
lengths = [len(tokenizer.encode(paragraph)) for paragraph in texts]
output = []
for i, par in enumerate(texts):
index = len(output)
if index > 0 and lengths[i] + len(tokenizer.encode(output[index-1])) <= max_len:
output[index-1] = "".join(output[index-1] + par)
else:
output.append(par)
return output
def get_tags(text, generate_kwargs):
input_text = 'summarize: ' + text.strip().replace('\n', ' ')
tokenized_text = tokenizer.encode(input_text, return_tensors="pt")
with torch.no_grad():
tags_ids = model.generate(tokenized_text, **generate_kwargs)
output = []
for tags in tags_ids:
cleaned = cleaning(
text,
list(set(tokenizer.decode(tags, skip_special_tokens=True).split(', ')))
)
output.append(cleaned)
return list(set(itertools.chain(*output)))
def tag(text, max_len, generate_kwargs):
texts = get_texts(text, max_len)
all_tags = [get_tags(text, generate_kwargs) for text in texts]
flatten_tags = itertools.chain(*all_tags)
return list(set(flatten_tags))
params = {
"min_length": 0,
"max_length": 30,
"no_repeat_ngram_size": 2,
"num_beams": 4,
"early_stopping": True,
"num_return_sequences": 4,
}
tags = tag(article, 512, params)
print(tags)
```
### Overview
- Model: T5 ([it5-small](https://huggingface.co/gsarti/it5-small))
- Language: Italian
- Downstream-task: Summarization (for topic tagging)
- Training data: Custom dataset
- Code: See example
- Infrastructure: 1x T4 |
flax-community/gpt-neo-125M-apps | 9cf90328fea0748637cb8b1cadc205ff94fb99b5 | 2021-09-22T08:25:34.000Z | [
"pytorch",
"jax",
"gpt_neo",
"text-generation",
"en",
"python",
"dataset:apps",
"arxiv:2107.03374",
"transformers",
"code_synthesis",
"license:mit"
] | text-generation | false | flax-community | null | flax-community/gpt-neo-125M-apps | 54 | null | transformers | 5,854 | ---
language:
- en
- python
license: mit
tags:
- gpt_neo
- code_synthesis
datasets:
- apps
---
# GPT-Neo-125M-APPS
> **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**
## Model Description
GPT-Neo-125M-APPS is a GPT-Neo-125M model fine-tuned on the APPS dataset. This model is specialized to solve programming tasks.
## Training data
The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each.
## Training procedure
The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py).
Training is done for 5 epochs using the AdamW optimizer and a linear decay learning rate schedule with 800 warmup steps. To reproduce the training, one can use the following command with the above script:
```bash
python run_clm_apps.py \
--output_dir $HOME/gpt-neo-125M-apps \
--model_name_or_path EleutherAI/gpt-neo-125M \
--dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \
--dataset_config_name formatted \
--do_train --do_eval \
--block_size="1024" \
--per_device_train_batch_size="16" \
--per_device_eval_batch_size="16" \
--preprocessing_num_workers="16" \
--learning_rate="8e-5" \
--warmup_steps="800" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--weight_decay="0.1" \
--overwrite_output_dir \
--num_train_epochs="5" \
--logging_steps="50" \
--eval_steps="2000" \
--report_to="wandb" \
--dtype="bfloat16" \
--save_strategy epoch \
--gradient_accumulation_steps 2 \
```
## Intended Use and Limitations
The model is finetuned to solve programming problems given a text description and optional starter code.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run on GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-125M-apps").to(device)
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-125M-apps")
prompt = """
A function to greet user. Given a user name it should say hello
def greet(name):
ANSWER:
"""
input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device)
start = input_ids.size(1)
out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2,
early_stopping=True, eos_token_id=tokenizer.eos_token_id, )
print(tokenizer.decode(out[0][start:]))
```
### Limitations and Biases
The model is intended to be used for research purposes and comes with no guarantees of quality of generated code.
The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discuss are highlighted here as it pertains to this dataset and models that may be trained from it. **As well as some differences in views from the paper, particularly around legal implications**.
1. **Over-reliance:** This model may generate plausible solutions that may appear correct, but are not necessarily the correct solution. Not properly evaluating the generated code may cause have negative consequences such as the introduction of bugs, or the introduction of security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
5. **Biases:** The model is trained on data containing prompt questions formatted in specific way. The performance of the model can be worse if the prompt
formatting is different from that used in APPS dataset.
GPT-CC is finetuned GPT-Neo and might have inhereted biases and limitations from it. See [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.
## Eval results
Coming soon...
|
google/tapas-medium-finetuned-wtq | ab7607cdf73b129a5f14db098fef3a90e280fe92 | 2022-07-14T10:14:59.000Z | [
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2004.02349",
"arxiv:2010.00571",
"arxiv:1508.00305",
"transformers",
"license:apache-2.0"
] | table-question-answering | false | google | null | google/tapas-medium-finetuned-wtq | 54 | null | transformers | 5,855 | ---
language: en
tags:
- tapas
- table-question-answering
license: apache-2.0
datasets:
- wikitablequestions
---
# TAPAS medium model fine-tuned on WikiTable Questions (WTQ)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_medium_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_medium` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.5062 | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset)
LARGE | reset | 0.5097 | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main)
BASE | noreset | 0.4525 | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset)
BASE | reset | 0.4638 | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main)
**MEDIUM** | **noreset** | **0.4324** | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset)
**MEDIUM** | **reset** | **0.4324** | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main)
SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset)
SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main)
MINI | noreset | 0.2783 | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset)
MINI | reset | 0.2854 | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main)
TINY | noreset | 0.0823 | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset)
TINY | reset | 0.1039 | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA, WikiSQL and finally WTQ.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
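As a brief sketch (the table and question below are illustrative, and the extra TAPAS dependencies such as `torch-scatter` and `pandas` are assumed to be installed), the model can be used with the `table-question-answering` pipeline:
```python
from transformers import pipeline

tqa = pipeline("table-question-answering", model="google/tapas-medium-finetuned-wtq")

table = {
    "Repository": ["Transformers", "Datasets", "Tokenizers"],
    "Stars": ["36542", "4512", "3934"],
}
print(tqa(table=table, query="How many stars does the transformers repository have?"))
```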
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors first converted the WTQ dataset into the format of SQA using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 1.93581e-5, and a warmup
ratio of 0.128960. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and
12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/PasupatL15,
author = {Panupong Pasupat and
Percy Liang},
title = {Compositional Semantic Parsing on Semi-Structured Tables},
journal = {CoRR},
volume = {abs/1508.00305},
year = {2015},
url = {http://arxiv.org/abs/1508.00305},
archivePrefix = {arXiv},
eprint = {1508.00305},
timestamp = {Mon, 13 Aug 2018 16:47:37 +0200},
biburl = {https://dblp.org/rec/journals/corr/PasupatL15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
gpt2-adstext/gpt2-adstext | 78f2295bfb2535a1a94e89aace15f07845d6fb7c | 2021-05-21T16:17:48.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | gpt2-adstext | null | gpt2-adstext/gpt2-adstext | 54 | null | transformers | 5,856 | Entry not found |
henryu-lin/t5-3b-samsum-deepspeed | 7feb6eb6ffb6218e7fb394c0f445bb946c255dc4 | 2021-07-08T06:45:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:samsum",
"transformers",
"azureml",
"summarization",
"deepspeed",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | henryu-lin | null | henryu-lin/t5-3b-samsum-deepspeed | 54 | null | transformers | 5,857 | ---
language: en
tags:
- azureml
- t5
- summarization
- deepspeed
license: apache-2.0
datasets:
- samsum
model-index:
- name: t5-3b-samsum-deepspeed
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization"
type: samsum
widget:
- text: |
Henry: Hey, is Nate coming over to watch the movie tonight?
Kevin: Yea, he said he'll be arriving a bit later at around 7 since he gets off of work at 6. Have you taken out the garbage yet? It's starting to make the kitchen really smell.
Henry: Oh I forgot. I'll do that once I'm finished with my assignment for my math class.
Kevin: Yea, you should take it out as soon as possible. And also, Nate is bringing his girlfriend too.
Henry: Nice, I'm really looking forward to seeing them again.
---
## `t5-3b-samsum-deepspeed`
This model was trained using Microsoft's `AzureML` and `DeepSpeed`'s ZeRO 2 optimization. It was fine-tuned on the `SAMSum` corpus from the `t5-3b` checkpoint.
More information on the fine-tuning process (includes samples and benchmarks):
*(currently still WIP, updates coming soon: 7/6/21~7/9/21)*
## Resource Usage
These results are retrieved from AzureML Studio's resource monitoring module. All experiments were ran on AzureML's low priority clusters.
| key | value |
| --- | ----- |
| AzureML SKU | ND40rs_v2 (8 X V100 32GB) |
| Region | US West 2 |
| Run Duration | 43m 51.05s |
| Compute Cost (LowPriority/Dedicated) | $3.22/$16.10 (USD) |
| Average CPU Utilization | 46.0% |
| Average GPU Utilization | 56.9% |
| GPU Memory Usage (Avg/Peak) | 26.77/30.49 (GB) |
| Total GPU Energy Usage | 2448.69 (kJ) |
*Compute cost is calculated from run duration and SKU's price per hour. Updated SKU pricing could be found here: https://azure.microsoft.com/en-us/pricing/details/machine-learning/
*Peak memory usage is calculated from average peak across all utilized GPUs.
### Carbon Emissions
These results are obtained using `codecarbon`. The carbon emission is estimated from training runtime only (excluding setup and evaluation runtime).
CodeCarbon: https://github.com/mlco2/codecarbon
| key | value |
| --- | ----- |
| timestamp | 2021-07-06T21:57:39 |
| duration | 1841.4621863365173 |
| emissions | 0.17802492531467784 |
| energy_consumed | 0.5982020339874927 |
| country_name | USA |
| region | Washington |
| cloud_provider | azure |
| cloud_region | westus2 |
## Hyperparameters
```yaml
fp16: True
per device batch size: 2
effective batch size: 16
epoch: 3.0
learning rate: 3e-5
weight decay: 0.0
seed: 1
```
*Same `per device batch size` for evaluations
### DeepSpeed
Optimizer = `AdamW`, Scheduler = `WarmupDecayLR`, Offload = `none`
```json
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 1000000000,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 1000000000,
"contiguous_gradients": true
}
```
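A configuration like the one above is typically passed to the Hugging Face `Trainer` through the `deepspeed` training argument. The sketch below uses the hyperparameters listed earlier; the config file path is a placeholder:
```python
from transformers import TrainingArguments

# Sketch only: point the Trainer at a DeepSpeed JSON config such as the one above.
training_args = TrainingArguments(
    output_dir="t5-3b-samsum-deepspeed",
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    learning_rate=3e-5,
    num_train_epochs=3,
    fp16=True,
    seed=1,
    deepspeed="ds_config.json",  # placeholder path to the DeepSpeed config
)
```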
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="henryu-lin/t5-3b-samsum-deepspeed")
conversation = '''Henry: Hey, is Nate coming over to watch the movie tonight?
Kevin: Yea, he said he'll be arriving a bit later at around 7 since he gets off of work at 6. Have you taken out the garbage yet? It's starting to make the kitchen really smell.
Henry: Oh I forgot. I'll do that once I'm finished with my assignment for my math class.
Kevin: Yea, you should take it out as soon as possible. And also, Nate is bringing his girlfriend too.
Henry: Nice, I'm really looking forward to seeing them again.
'''
summarizer(conversation)
```
## Results
| ROUGE | Score |
| ----- | ----- |
| eval_rouge1 | 54.7875 |
| eval_rouge2 | 30.565 |
| eval_rougeL | 45.7625 |
| eval_rougeLsum | 50.3915 |
| predict_rouge1 | 53.6628 |
| predict_rouge2 | 29.0196 |
| predict_rougeL | 45.1257 |
| predict_rougeLsum | 49.171 |
| Metric | Value |
| ------ | ----- |
| eval_gen_len | 25.3399 |
| predict_gen_len | 24.9133 |
| train_loss | 1.1206104169494209 |
| eval_loss | 1.0732421875 |
| predict_loss | 1.087890625 |
| train_runtime | 1841.3751 |
| train_samples | 14732 |
| train_samples_per_second | 24.002 |
| train_steps_per_second | 1.501 |
| eval_runtime | 163.8357 |
| eval_samples | 818 |
| eval_samples_per_second | 4.993 |
| eval_steps_per_second | 0.317 |
| predict_runtime | 168.8245 |
| predict_samples | 819 |
| predict_samples_per_second | 4.851 |
| predict_steps_per_second | 0.308 |
| total_steps | 2763 |
| total_flos | 1.84452086400811e+17 |
|
inywer/DialoGPT-small-awazimuruk | 9438b947d184d8e46981bf8a7c5e842983e9aabd | 2021-10-05T21:39:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | inywer | null | inywer/DialoGPT-small-awazimuruk | 54 | null | transformers | 5,858 | ---
tags:
- conversational
---
# awazimuruk DialoGPT Model |
kornosk/bert-political-election2020-twitter-mlm | 674022950a54308a1cbf9394c80056b59b100aed | 2022-05-10T04:45:45.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"en",
"transformers",
"twitter",
"masked-token-prediction",
"election2020",
"politics",
"license:gpl-3.0",
"autotrain_compatible"
] | fill-mask | false | kornosk | null | kornosk/bert-political-election2020-twitter-mlm | 54 | 2 | transformers | 5,859 | ---
language: "en"
tags:
- twitter
- masked-token-prediction
- election2020
- politics
license: "gpl-3.0"
---
# Pre-trained BERT on Twitter US Political Election 2020
Pre-trained weights for [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
The model is initialized with the weights of BERT-base (uncased), i.e. `bert-base-uncased`.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election.
# Training Objective
This model is initialized with BERT-base and trained with the standard MLM objective.
# Usage
This pre-trained language model **can be fine-tuned on any downstream task (e.g. classification)**.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline
import torch
# Choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Select mode path here
pretrained_LM_path = "kornosk/bert-political-election2020-twitter-mlm"
# Load model
tokenizer = BertTokenizer.from_pretrained(pretrained_LM_path)
model = BertForMaskedLM.from_pretrained(pretrained_LM_path)
# Fill mask
example = "Trump is the [MASK] of USA"
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
# If the above line does not work, use the following one instead:
# newer versions of Hugging Face Transformers accept the model name string directly.
fill_mask = pipeline('fill-mask', model=pretrained_LM_path, tokenizer=tokenizer)
outputs = fill_mask(example)
print(outputs)
# See embeddings
inputs = tokenizer(example, return_tensors="pt")
outputs = model(**inputs)
print(outputs)
# OR you can use this model to train on your downstream task!
# Please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
``` |
navteca/tapas-large-finetuned-wtq | cd7feb8b379e08187f8927debec340fa05ca3715 | 2021-08-09T12:46:26.000Z | [
"pytorch",
"tapas",
"table-question-answering",
"en",
"dataset:sqa",
"dataset:wikisql",
"dataset:wtq",
"arxiv:2004.02349",
"transformers",
"license:mit"
] | table-question-answering | false | navteca | null | navteca/tapas-large-finetuned-wtq | 54 | null | transformers | 5,860 | ---
datasets:
- sqa
- wikisql
- wtq
language: en
license: mit
pipeline_tag: table-question-answering
tags:
- tapas
- table-question-answering
---
# TAPAS large model fine-tuned on WikiTable Questions (WTQ)
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table.
[TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349)
## Training Data
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions).
It can be used for answering questions related to a table.
## Usage and Performance
The trained model can be used like this:
```python
from transformers import AutoModelForTableQuestionAnswering, AutoTokenizer, pipeline
# Load model & tokenizer
tapas_model = AutoModelForTableQuestionAnswering.from_pretrained('navteca/tapas-large-finetuned-wtq')
tapas_tokenizer = AutoTokenizer.from_pretrained('navteca/tapas-large-finetuned-wtq')
# Get predictions
nlp = pipeline('table-question-answering', model=tapas_model, tokenizer=tapas_tokenizer)
result = nlp({
'table': {
'Repository': [
'Transformers',
'Datasets',
'Tokenizers'
],
'Stars': [
'36542',
'4512',
'3934'
],
'Contributors': [
'651',
'77',
'34'
],
'Programming language': [
'Python',
'Python',
'Rust, Python and NodeJS'
]
},
'query': 'How many stars does the transformers repository have?'
})
print(result)
#{
# "answer": "SUM > 36542",
# "coordinates": [
# [
# 0,
# 1
# ]
# ],
# "cells": [
# "36542"
# ],
# "aggregator": "SUM"
#}
``` |
pparasurama/raceBERT-ethnicity | babd5c78b799aaac6346b7d934b510674ae27d63 | 2021-11-09T20:42:29.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pparasurama | null | pparasurama/raceBERT-ethnicity | 54 | 1 | transformers | 5,861 | Entry not found |
speechbrain/asr-crdnn-commonvoice-it | 5eef440c7208564abdc2a55cabd61ca824a4ea4e | 2021-11-30T00:37:30.000Z | [
"it",
"dataset:common_voice",
"arxiv:2106.04624",
"speechbrain",
"automatic-speech-recognition",
"CTC",
"Attention",
"pytorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-crdnn-commonvoice-it | 54 | null | speechbrain | 5,862 | ---
language: "it"
thumbnail:
tags:
- automatic-speech-recognition
- CTC
- Attention
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- common_voice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# CRDNN with CTC/Attention trained on CommonVoice Italian (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on CommonVoice (IT) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test CER | Test WER | GPUs |
|:-------------:|:--------------:|:--------------:| :--------:|
| 07-03-21 | 5.40 | 16.61 | 2xV100 16GB |
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions (train.tsv) of CommonVoice (IT).
- Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of
N blocks of convolutional neural networks with normalization and pooling on the
frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain
the final acoustic representation that is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Italian)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-commonvoice-it", savedir="pretrained_models/asr-crdnn-commonvoice-it")
asr_model.transcribe_file("speechbrain/asr-crdnn-commonvoice-it/example-it.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
### Training
The model was trained with SpeechBrain (Commit hash: '986a2175').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/CommonVoice/ASR/seq2seq
python train.py hparams/train_it.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1asxPsY1EBGHIpIFhBtUi9oiyR6C7gC0g?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
uer/bart-chinese-4-768-cluecorpussmall | 9736dc35093a9815a4c3fdb254b8a0f22b793f73 | 2021-10-08T14:46:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"Chinese",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uer | null | uer/bart-chinese-4-768-cluecorpussmall | 54 | 1 | transformers | 5,863 | ---
language: Chinese
datasets: CLUECorpusSmall
widget:
- text: "作为电子[MASK]的平台,京东绝对是领先者。如今的刘强[MASK]已经是身价过[MASK]的老板。"
---
# Chinese BART
## Model description
This model is pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).
## How to use
You can use this model directly with a pipeline for text2text generation :
```python
>>> from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/bart-chinese-4-768-cluecorpussmall")
>>> model = BartForConditionalGeneration.from_pretrained("uer/bart-chinese-4-768-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("中国的首都是[MASK]京", max_length=50, do_sample=False)
[{'generated_text': '中 国 的 首 都 是 北 京'}]
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) Common Crawl and some short messages are used as training data.
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 512.
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bart_from_uer_to_huggingface.py --input_model_path cluecorpussmall_bart_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 4
```
### BibTeX entry and citation info
```
@article{lewis2019bart,
title={Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension},
author={Lewis, Mike and Liu, Yinhan and Goyal, Naman and Ghazvininejad, Marjan and Mohamed, Abdelrahman and Levy, Omer and Stoyanov, Ves and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:1910.13461},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
``` |
w11wo/sundanese-gpt2-base | fe065ade05b3b0a104e7ec5799bc0bf306feaa7d | 2022-02-26T13:15:02.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"su",
"dataset:mc4",
"dataset:cc100",
"dataset:oscar",
"dataset:wikipedia",
"transformers",
"sundanese-gpt2-base",
"license:mit"
] | text-generation | false | w11wo | null | w11wo/sundanese-gpt2-base | 54 | 1 | transformers | 5,864 | ---
language: su
tags:
- sundanese-gpt2-base
license: mit
datasets:
- mc4
- cc100
- oscar
- wikipedia
widget:
- text: "Nami abdi Budi, ti Indonésia"
---
## Sundanese GPT-2 Base
Sundanese GPT-2 Base is a causal language model based on the [OpenAI GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model. It was trained on four datasets: [OSCAR](https://hf.co/datasets/oscar)'s `unshuffled_deduplicated_su` subset, the Sundanese [mC4](https://hf.co/datasets/mc4) subset, the Sundanese [CC100](https://hf.co/datasets/cc100) subset, and Sundanese [Wikipedia](https://su.wikipedia.org/).
10% of the dataset is kept for evaluation purposes. The model was trained from scratch and achieved an evaluation loss of 3.61 and an evaluation perplexity of 36.97.
This model was trained using HuggingFace's Flax framework. All necessary scripts used for training can be found in the [Files and versions](https://hf.co/w11wo/sundanese-gpt2-base/tree/main) tab, as well as the [Training metrics](https://hf.co/w11wo/sundanese-gpt2-base/tensorboard) logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| --------------------- | ------- | ----- | ------------------------------------- |
| `sundanese-gpt2-base` | 124M | GPT-2 | OSCAR, mC4, CC100, Wikipedia (758 MB) |
## Evaluation Results
The model was trained for 50 epochs and the following is the final result once the training ended.
| train loss | valid loss | valid PPL | total time |
| ---------- | ---------- | --------- | ---------- |
| 2.436 | 3.61 | 36.97 | 7:1:54 |
## How to Use
### As Causal Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/sundanese-gpt2-base"
nlp = pipeline(
"text-generation",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Nami abdi Budi, ti Indonésia")
```
### Feature Extraction in PyTorch
```python
from transformers import GPT2Model, GPT2TokenizerFast
pretrained_name = "w11wo/sundanese-gpt2-base"
model = GPT2Model.from_pretrained(pretrained_name)
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)
prompt = "Nami abdi Budi, ti Indonésia"
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases from all four pre-training datasets, which may carry over into the results of this model.
## Author
Sundanese GPT-2 Base was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/).
## Citation Information
```bib
@article{rs-907893,
author = {Wongso, Wilson
and Lucky, Henry
and Suhartono, Derwin},
journal = {Journal of Big Data},
year = {2022},
month = {Feb},
day = {26},
abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
issn = {2693-5015},
doi = {10.21203/rs.3.rs-907893/v1},
url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
``` |
wietsedv/bert-base-dutch-cased-finetuned-lassysmall-pos | 3b28296bc7d0f1d5c2099279cff7462b64458117 | 2021-05-20T09:08:08.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/bert-base-dutch-cased-finetuned-lassysmall-pos | 54 | null | transformers | 5,865 | Entry not found |
IIC/roberta-base-spanish-sqac | 642575580b06c3e71f56bc520b610451b220ab8c | 2022-04-02T15:11:03.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:PlanTL-GOB-ES/SQAC",
"arxiv:2107.07253",
"transformers",
"model-index",
"autotrain_compatible"
] | question-answering | false | IIC | null | IIC/roberta-base-spanish-sqac | 54 | 1 | transformers | 5,866 | ---
language:
- es
tags:
- question-answering # Example: audio
datasets:
- PlanTL-GOB-ES/SQAC
metrics:
- f1
# Optional. Add this if you want to encode your eval results in a structured way.
model-index:
- name: roberta-base-spanish_sqac
results:
- task:
type: question-answering # Required. Example: automatic-speech-recognition
name: question-answering # Optional. Example: Speech Recognition
dataset:
type: SQAC # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: PlanTL-GOB-ES/SQAC # Required. Example: Common Voice zh-CN
args: es # Optional. Example: zh-CN
metrics:
- type: f1
value: 86.6
name: f1
---
This model was trained on the [SQAC](https://huggingface.co/datasets/BSC-TeMU/SQAC) dataset, provided by [BSC](https://www.bsc.es/). It is a question-answering dataset originally developed in Spanish. As for the model, it is a fine-tuned version of [MarIA-Roberta](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne), a Spanish RoBERTa also developed by BSC under the MarIA project.
For training the model, we followed the authors' own recommendations in [their paper](https://arxiv.org/abs/2107.07253), performed a full grid search over the hyperparameter space provided in the paper, and selected the best model based on eval\_loss.
You can use the model like this:
```python
from transformers import RobertaTokenizer, RobertaForQuestionAnswering
import torch
tokenizer = RobertaTokenizer.from_pretrained("IIC/roberta-base-spanish-sqac")
model = RobertaForQuestionAnswering.from_pretrained("IIC/roberta-base-spanish-sqac")
question, text = "Quién es el padre de Luke Skywalker?", "En la famosa película, Darth Veider le dice a Luke Skywalker aquella frase que todos recordamos: yo soy tu padre."
inputs = tokenizer(question, text, return_tensors="pt")
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
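For plain inference (extracting the answer span directly), you can also use the `question-answering` pipeline. A minimal sketch:
```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="IIC/roberta-base-spanish-sqac",
    tokenizer="IIC/roberta-base-spanish-sqac",
)

result = qa_pipeline(
    question="Quién es el padre de Luke Skywalker?",
    context="En la famosa película, Darth Veider le dice a Luke Skywalker aquella frase que todos recordamos: yo soy tu padre.",
)
# The pipeline returns the extracted answer span together with its confidence score
print(result["answer"], result["score"])
```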
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. |
mary905el/rugpt3large_neuro_chgk | 75a2e2dde695cf432d5933ed485ad7f388e1782a | 2022-04-26T06:34:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"transformers",
"PyTorch",
"Transformers"
] | text-generation | false | mary905el | null | mary905el/rugpt3large_neuro_chgk | 54 | null | transformers | 5,867 | ---
language:
- ru
tags:
- PyTorch
- Transformers
- text-generation
widget:
- text: "Известный человек"
inference:
parameters:
max_length: 60
do_sample: True
temperature: 0.6
no_repeat_ngram_size: 2
---
This is the https://huggingface.co/sberbank-ai/rugpt3large_based_on_gpt2 model, fine-tuned on questions from CHGK (Что? Где? Когда?, https://db.chgk.info/).
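A minimal usage sketch with the `text-generation` pipeline, reusing the generation settings from the widget above:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="mary905el/rugpt3large_neuro_chgk")

# Generation settings mirror the widget parameters declared in this card
outputs = generator(
    "Известный человек",
    max_length=60,
    do_sample=True,
    temperature=0.6,
    no_repeat_ngram_size=2,
)
print(outputs[0]["generated_text"])
```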
Dataset: 75 000 questions from 2000-2019
Trained for 5 epochs |
ifrz/wav2vec2-large-xlsr-galician | 0e2118b3fcc822fca186a3a2c2a2253ff7520259 | 2022-07-13T07:15:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gl",
"dataset:OpenSLR 77",
"dataset:mozilla-foundation common_voice_8_0",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ifrz | null | ifrz/wav2vec2-large-xlsr-galician | 54 | null | transformers | 5,868 | # wav2vec2-large-xlsr-galician
---
language: gl
datasets:
- OpenSLR 77
- mozilla-foundation common_voice_8_0
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Galician wav2vec2-large-xlsr-galician
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset_1:
name: OpenSLR
type: openslr
args: gl
dataset_2:
name: mozilla-foundation
type: common voice
args: gl
metrics:
- name: Test WER
type: wer
value: 7.12
---
# Model
Fine-tuned model for the Galician language
Based on the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) self-supervised model
Fine-tuned with labelled audio from [OpenSLR](https://openslr.org/77/) and Mozilla [Common_Voice](https://commonvoice.mozilla.org/gl) (both datasets previously refined)
Check the training metrics to see the results
# Testing
Make sure that the audio speech input is sampled at 16kHz (mono).
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
model = Wav2Vec2ForCTC.from_pretrained("ifrz/wav2vec2-large-xlsr-galician")
processor = Wav2Vec2Processor.from_pretrained("ifrz/wav2vec2-large-xlsr-galician")
# Reading the recorded audio clip
import librosa, torch
audio, rate = librosa.load("./gl_test_1.wav", sr = 16000)
# Taking an input value
input_values = processor(audio, sampling_rate=16_000, return_tensors = "pt", padding="longest").input_values
# Storing logits (non-normalized prediction values)
logits = model(input_values).logits
# Storing predicted ids
prediction = torch.argmax(logits, dim = -1)
# Passing the prediction to the tokenizer decode to get the transcription
transcription = processor.batch_decode(prediction)[0]
print(transcription)
``` |
doc2query/msmarco-portuguese-mt5-base-v1 | c92f7664af7e81c16fccbf45abbf27434e62a1f2 | 2022-04-29T12:08:25.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"pt",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-portuguese-mt5-base-v1 | 54 | 2 | transformers | 5,869 | ---
language: pt
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python é uma linguagem de programação de alto nível, interpretada de script, imperativa, orientada a objetos, funcional, de tipagem dinâmica e forte. Foi lançada por Guido van Rossum em 1991. Atualmente, possui um modelo de desenvolvimento comunitário, aberto e gerenciado pela organização sem fins lucrativos Python Software Foundation. Apesar de várias partes da linguagem possuírem padrões e especificações formais, a linguagem, como um todo, não é formalmente especificada. O padrão de facto é a implementação CPython."
license: apache-2.0
---
# doc2query/msmarco-portuguese-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs and the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-portuguese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python é uma linguagem de programação de alto nível, interpretada de script, imperativa, orientada a objetos, funcional, de tipagem dinâmica e forte. Foi lançada por Guido van Rossum em 1991. Atualmente, possui um modelo de desenvolvimento comunitário, aberto e gerenciado pela organização sem fins lucrativos Python Software Foundation. Apesar de várias partes da linguagem possuírem padrões e especificações formais, a linguagem, como um todo, não é formalmente especificada. O padrão de facto é a implementação CPython."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
    # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_k/top_p sampling. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
joaogante/test_audio | d08c0a24b1982bf1f6ee06e87de5f5f178ec7300 | 2022-05-31T11:55:52.000Z | [
"pytorch",
"speech_to_text",
"automatic-speech-recognition",
"fr",
"en",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"transformers",
"audio",
"speech-translation",
"license:mit"
] | automatic-speech-recognition | false | joaogante | null | joaogante/test_audio | 54 | null | transformers | 5,870 | ---
language:
- fr
- en
datasets:
- covost2
tags:
- audio
- speech-translation
- automatic-speech-recognition
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# S2T-SMALL-COVOST2-FR-EN-ST
`s2t-small-covost2-fr-en-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end French speech to English text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You can either install these as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-fr-en-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-fr-en-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-covost2-fr-en-st model is trained on the French-English subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest-ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVoST2 test results for fr-en (BLEU score): 26.25
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
north/demo-nynorsk-base | 8954f48e96bc98db66a97d39f87dfa5fca4a6a20 | 2022-05-29T20:09:12.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"transformers",
"translation",
"license:cc-by-4.0",
"autotrain_compatible"
] | translation | false | north | null | north/demo-nynorsk-base | 54 | null | transformers | 5,871 | ---
language: no
tags:
- translation
widget:
- text: "En av de vanskeligste oppgavene når man oversetter fra bokmål til nynorsk, er å passe på at man bruker riktige pronomen. Man kan for eksempel si at man eier en bil og at den er rød."
- text: "Arbeidsmiljøloven har også som formål å sikre et arbeidsmiljø som gir grunnlag for en helsefremmende og meningsfylt arbeidssituasjon, og bidra til et inkluderende arbeidsliv."
- text: "Alle søknader behandles konfidensielt."
- text: "Kommunens nettsider henviser til kommunens vedtak."
license: cc-by-4.0
---
# Nynorsk Translator
This demo translates text from Norwegian Bokmål to Norwegian Nynorsk.
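A minimal usage sketch, assuming the model can be called through the generic `text2text-generation` pipeline without a task prefix:
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="north/demo-nynorsk-base")

# Bokmål input; the model generates the Nynorsk translation
text = "Alle søknader behandles konfidensielt."
print(translator(text, max_length=128)[0]["generated_text"])
```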
The Nynorsk Translator is fine-tuned from North-T5. It is a simple base model intended for demo purposes only; please do not use it for translating larger amounts of text. |
tinkoff-ai/response-toxicity-classifier-base | 9f849fade5bfa7c027ec968cec7afbe4ad652107 | 2022-07-22T05:43:10.000Z | [
"pytorch",
"bert",
"text-classification",
"ru",
"transformers",
"russian",
"pretraining",
"conversational",
"license:mit"
] | text-classification | false | tinkoff-ai | null | tinkoff-ai/response-toxicity-classifier-base | 54 | null | transformers | 5,872 | ---
language: ["ru"]
tags:
- russian
- pretraining
- conversational
license: mit
widget:
- text: "[CLS] привет [SEP] привет! [SEP] как дела? [RESPONSE_TOKEN] норм"
example_title: "Dialog example 1"
- text: "[CLS] привет [SEP] привет! [SEP] как дела? [RESPONSE_TOKEN] ты *****"
example_title: "Dialog example 2"
---
# response-toxicity-classifier-base
[BERT classifier from Skoltech](https://huggingface.co/Skoltech/russian-inappropriate-messages), finetuned on contextual data with 4 labels.
# Training
[*Skoltech/russian-inappropriate-messages*](https://huggingface.co/Skoltech/russian-inappropriate-messages) was finetuned on multiclass data with four classes (*check the exact mapping between idx and label in* `model.config`).
1) OK label — the message is OK in context and does not intend to offend or somehow harm the reputation of the speaker.
2) Toxic label — the message might be seen as offensive in the given context.
3) Severe toxic label — the message is offensive, full of anger, and was written to provoke a fight or other discomfort.
4) Risks label — the message touches on sensitive topics and can harm the reputation of the speaker (e.g. religion, politics).
The model was finetuned on soon-to-be-posted dialogue datasets.
# Evaluation results
Model achieves the following results on the validation datasets (will be posted soon):
|| OK - F1-score | TOXIC - F1-score | SEVERE TOXIC - F1-score | RISKS - F1-score |
|---------|---------------|------------------|-------------------------|------------------|
|internet dialogs | 0.896 | 0.348 | 0.490 | 0.591 |
|chatbot dialogs | 0.940 | 0.295 | 0.729 | 0.46 |
# Use in transformers
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/response-toxicity-classifier-base')
model = AutoModelForSequenceClassification.from_pretrained('tinkoff-ai/response-toxicity-classifier-base')
inputs = tokenizer('[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?', max_length=128, add_special_tokens=False, return_tensors='pt')
with torch.inference_mode():
logits = model(**inputs).logits
probas = torch.softmax(logits, dim=-1)[0].cpu().detach().numpy()
```
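To map the probabilities back to class names, you can use the label mapping stored in the model config (continuing the snippet above; a sketch):
```python
# `probas` and `model` come from the snippet above;
# config.id2label holds the idx -> label mapping mentioned earlier
predicted_label = model.config.id2label[int(probas.argmax())]
print(predicted_label, float(probas.max()))
```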
The work was done during internship at Tinkoff by [Nikita Stepanov](https://huggingface.co/nikitast).
|
nytimesrd/paraphrase-MiniLM-L6-v2 | 8aa77c41847a7ce070af41af6f0cf838b08d1969 | 2022-06-06T17:21:31.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | nytimesrd | null | nytimesrd/paraphrase-MiniLM-L6-v2 | 54 | null | sentence-transformers | 5,873 | ---
pipeline_tag: feature-extraction
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/paraphrase-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This is a clone of the original model, with `pipeline_tag` metadata changed to `feature-extraction`, so it can just return the embedded vector. Otherwise it is unchanged.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L6-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
Hamzaaa/wav2vec2-base-finetuned-Tess | 505a1dc87f18d6a21e6461956c08fe4bea579210 | 2022-07-13T16:36:11.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | Hamzaaa | null | Hamzaaa/wav2vec2-base-finetuned-Tess | 54 | null | transformers | 5,874 | Entry not found |
Danish-summarisation/dansum-mt5-base-v1 | 3be4ac3aa9c032fd41c4d272165357635b164d09 | 2022-07-07T12:06:30.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"da",
"arxiv:1804.11283",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | Danish-summarisation | null | Danish-summarisation/dansum-mt5-base-v1 | 54 | null | transformers | 5,875 | ---
language:
- da
tags:
- summarization
widget:
- text: "De strejkende SAS-piloter melder sig nu klar til gøre en undtagelse fra strejken for at hente strandede chartergæster hjem fra flere ferieområder.
Undtagelsen skal gælde nogle uger frem, men piloterne vil under ingen omstændigheder have nye gæster med sig ned til de samme destinationer.
Det skriver SAS Pilot Group i en pressemeddelelse.
- Vi forstår, at det er uundgåeligt, at vores passagerer bliver ramt af strejken. Men vi piloter er altid fokuseret på at opføre os ansvarligt med passagersikkerheden som højeste prioritet, siger Martin Lindgren, der er formand for SAS Pilot Group i Norden.
Men for at hjælpe strandede gæster kræver de strejkende piloter samtidig, at SAS' trækker sin lockout af piloterne tilbage.
Samtidig ser SAS Pilot Group det som en forudsætning, at SAS ikke får hjælp fra andre flyselskaber til at flyve nye passagerer til de samme destinationer, som piloterne tilbyder at flyve gæster hjem fra, skriver fagforeningen."
example_title: "Example 1"
- text: "Mere end 21.000 krigsforbrydelser. Så mange efterforsker de ukrainske myndigheder lige nu ifølge den ukrainske rigsadvokat, Iryna Venediktova.
Hun oplyser til britiske BBC, at der bliver anmeldt mellem 200 og 300 nye sager om dagen.
Forbrydelserne er ifølge Venediktova svære at efterforske, fordi det kan være vanskeligt at komme frem til de relevante områder og mennesker.
Men hun understreger overfor BBC, at russiske soldater, der har dræbt, tortureret eller voldtaget civile, bør forstå, at det kun er et spørgsmål om tid, før de alle vil komme for retten.
Rusland er blevet anklaget for en lang række krigsforbrydelser, siden landet invaderede Ukraine den 24. februar, men afviser alle anklager."
example_title: "Example 2"
- text: "Det nye studie Cognitive Science på Aarhus Universitet, som i år havde Østjyllands højeste adgangskrav på 11,7 i karaktergennemsnit, udklækker det første hold bachelorer til sommer.
Men når de skal læse videre på kandidaten må de til udlandet, hvis ikke de vil skifte til et andet fag. Aarhus Universitet kan nemlig ikke nå at oprette en kandidat i Cognitive Science til næste sommer, hvor det første hold bachelorer er færdige.
Det rammer blandt andre Julie Sohn, der startede på uddannelsen i sommeren 2015, og derfor kun mangler et år, før hun er bachelor.
- Jeg synes, at det er ærgerligt, at vi som nye studerende på et populært studie ikke kan tage en kandidat i Danmark, siger hun.
Bacheloruddannelsen i Cognitive Science blev oprettet af Aarhus Universitet i 2015, og uddannelsen kombinerer viden om menneskelig adfærd med avanceret statistik. Da der endnu ikke er oprettet en kandidatuddannelse indenfor dette område, har Julie Sohn i stedet mulighed for at læse en kandidatgrad i for eksempel informationsvidenskab.
Hun vil dog hellere fortsætte på Cognitive Science, og derfor overvejer hun nu at læse videre i udlandet.
- Det ser ud til, at det er den eneste mulighed, hvis man gerne vil læse videre på noget, der faktisk passer ind til vores studie, siger hun.
Nye regler giver forsinkelse
På Aarhus Universitet havde man håbet på at have kandidatuddannelsen klar, når det første hold bachelorer bliver færdige til sommer. Arbejdet er dog blevet forsinket, fordi der er kommet nye regler for, hvornår man må oprette en uddannelse, fortæller Niels Lehmann, prodekan på fakultetet Arts, som Cognitive Science hører under.
Det er nogle meget dygtige studerende, der kommer ind på uddannelsen, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast.
NIELS LEHMANN, PRODEKAN, AARHUS UNIVERSITET
Tidligere skulle Danmarks Akkrediteringsinstitution se alle nye uddannelser efter i sømmene for at sikre, at kvaliteten var i orden. Nu skal uddannelsesinstitutionerne selv stå for det kvalitetstjek.
Men det tjek har Aarhus Universitet endnu ikke fået grønt lys til selv at udføre, fortæller prodekanen.
- Vi ville meget gerne have kunnet nå at få et udbud på kandidaten i gang i 2018, men så længe man er under institutionsakkreditering, så kan man ikke ansøge om nye uddannelser, siger han.
Det er endnu usikkert, hvornår Aarhus Universitet kan oprette kandidaten i Cognitive Science. Hvis de får alle de nødvendige godkendelser, kan den tidligst være klar i 2019.
Prodekan Niels Lehmann frygter, at Danmark kommer til at miste nogle af landets skarpeste studerende, hvis de er nødt til at rejse til udlandet for at gøre deres uddannelse færdig.
- Det er nogle meget, meget dygtige studerende, der kommer ind på denne uddannelse, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast, siger han.
Hos Danmarks Akkrediteringsinstitution forstår man godt, at universitets ansatte og studenrede ærgrer sig.
- Jeg kan godt forstå, at Aarhus Universitet ærgrer sig over, at det trækker ud, og at der går noget tid, før man får mulighed for at oprette nye uddannelser, og at man ikke har fået den genvej til at oprette nye uddannelser, som ville være fuldt med, hvis man havde opnået en positiv institutionsakkreditering, siger kommunikationsansvarlig Daniel Sebastian Larsen.
I år var Cognitive Science i Aarhus den uddannelse i Danmark, der havde det fjerde højeste karakterkrav - det højeste var 'AP Graduate in Marketing Management' på Erhvervsakademi Sjælland med et krav på 12,3."
example_title: "Example 3"
---
# mT5-base fine-tuned for News article Summarisation ✏️🧾
[Google's mT5](https://aclanthology.org/2021.naacl-main.41/) fine-tuned for the **summarisation** downstream task.
# Model summary
This repository contains a model for Danish abstractive summarisation of news articles. The summariser is based on a language-specific mT5-base, where the vocabulary is condensed to include tokens used in Danish and English. The model is fine-tuned using an abstractive subset of the DaNewsroom dataset (Varab & Schluter, 2020), according to the binned density categories employed in Newsroom (Grusky et al., 2018).
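A minimal usage sketch with the generic `summarization` pipeline (the generation settings below are illustrative only):
```python
from transformers import pipeline

summariser = pipeline("summarization", model="Danish-summarisation/dansum-mt5-base-v1")

article = (
    "De strejkende SAS-piloter melder sig nu klar til gøre en undtagelse fra strejken "
    "for at hente strandede chartergæster hjem fra flere ferieområder."
)
summary = summariser(article, max_length=128, min_length=20, no_repeat_ngram_size=3)
print(summary[0]["summary_text"])
```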
# References
Grusky, M., Naaman, M., & Artzi, Y. (2018). Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies. ArXiv:1804.11283 [Cs]. http://arxiv.org/abs/1804.11283
Varab, D., & Schluter, N. (2020). DaNewsroom: A Large-scale Danish Summarisation Dataset. Proceedings of the 12th Language Resources and Evaluation Conference, 6731–6739. https://aclanthology.org/2020.lrec-1.831
|
sam34738/bert-hinglish | 42532573f07e417493228ed8607c8305d81ee854 | 2022-07-08T00:00:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | sam34738 | null | sam34738/bert-hinglish | 54 | null | transformers | 5,876 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-hinglish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-hinglish
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
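For reference, the settings above roughly correspond to the following `TrainingArguments` (a sketch; the output directory is illustrative and the exact training script is not included in this card):
```python
from transformers import TrainingArguments

# Sketch of TrainingArguments mirroring the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="bert-hinglish",        # illustrative output path
    learning_rate=4e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
)
```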
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3557 | 1.0 | 460 | 0.7714 |
| 0.6349 | 2.0 | 920 | 0.5475 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
simecek/DNADebertaBPE10k | b8fec7d20681a8860542a645eadd01a28b4c0b31 | 2022-07-15T06:43:43.000Z | [
"pytorch",
"tensorboard",
"deberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/DNADebertaBPE10k | 54 | null | transformers | 5,877 | ---
tags:
- generated_from_trainer
model-index:
- name: DNADebertaBPE10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DNADebertaBPE10k
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 4.7323
- eval_runtime: 283.5074
- eval_samples_per_second: 394.223
- eval_steps_per_second: 24.641
- epoch: 7.43
- step: 116731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
hassan4830/distil-bert-uncased-finetuned-english | c67f0b49bc66d1e758e35b5449b416c367ae80e5 | 2022-07-21T14:29:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:afl-3.0"
] | text-classification | false | hassan4830 | null | hassan4830/distil-bert-uncased-finetuned-english | 54 | 1 | transformers | 5,878 | ---
license: afl-3.0
---
DistilBERT Binary Text Classifier
This DistilBERT-based text classification model, trained on the IMDB dataset, performs binary sentiment classification on any given sentence.
The model has been fine-tuned for better results in manageable time frames.
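A minimal usage sketch with the `text-classification` pipeline; the labels it returns are explained below.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hassan4830/distil-bert-uncased-finetuned-english",
)

# Returns the predicted label and its score for the given sentence
print(classifier("This movie was absolutely wonderful!"))
```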
LABEL0 - Negative
LABEL1 - Positive |
AlekseyDorkin/xlm-roberta-en-ru-emoji | dc5f644204d24cfde0619a74ce5b729edb014fe4 | 2021-11-15T18:08:03.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"ru",
"dataset:tweet_eval",
"transformers"
] | text-classification | false | AlekseyDorkin | null | AlekseyDorkin/xlm-roberta-en-ru-emoji | 53 | null | transformers | 5,879 | ---
language:
- en
- ru
datasets:
- tweet_eval
model_index:
- name: xlm-roberta-en-ru-emoji
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: Tweet Eval
type: tweet_eval
args: emoji
widget:
- text: "Отлично!"
- text: "Awesome!"
- text: "lol"
---
# xlm-roberta-en-ru-emoji
- Problem type: Multi-class Classification |
CenIA/albert-xxlarge-spanish-finetuned-qa-mlqa | f54896f961e419cb0dcb9e08d9b741448777d79e | 2022-02-14T16:59:14.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-xxlarge-spanish-finetuned-qa-mlqa | 53 | null | transformers | 5,880 | Entry not found |
DTAI-KULeuven/mbert-corona-tweets-belgium-topics | 92031a69cf948386540c9429199162b7c0b4fd98 | 2021-05-18T18:07:37.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"arxiv:2104.09947",
"transformers",
"Dutch",
"French",
"English",
"Tweets",
"Topic classification"
] | text-classification | false | DTAI-KULeuven | null | DTAI-KULeuven/mbert-corona-tweets-belgium-topics | 53 | null | transformers | 5,881 | ---
language: "multilingual"
tags:
- Dutch
- French
- English
- Tweets
- Topic classification
widget:
- text: "I really can't wait for this lockdown to be over and go back to waking up early."
---
# Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT
[Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947)
We categorized several months' worth of these Tweets by topic (government COVID measure) and opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top).

Models used in this paper are on HuggingFace:
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
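A minimal usage sketch for the topic model, assuming the standard `text-classification` pipeline (the exact topic labels are defined in the model config):
```python
from transformers import pipeline

topic_classifier = pipeline(
    "text-classification",
    model="DTAI-KULeuven/mbert-corona-tweets-belgium-topics",
)

tweet = "I really can't wait for this lockdown to be over and go back to waking up early."
print(topic_classifier(tweet))
```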
|
Davlan/distilbert-base-multilingual-cased-masakhaner | 319d81df340d7006aeaafd5ff30f0ff1e47abdeb | 2022-06-27T10:57:26.000Z | [
"pytorch",
"tf",
"distilbert",
"token-classification",
"ha",
"ig",
"rw",
"lg",
"luo",
"pcm",
"sw",
"wo",
"yo",
"multilingual",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | false | Davlan | null | Davlan/distilbert-base-multilingual-cased-masakhaner | 53 | 1 | transformers | 5,882 | Hugging Face's logo
---
language:
- ha
- ig
- rw
- lg
- luo
- pcm
- sw
- wo
- yo
- multilingual
datasets:
- masakhaner
---
# distilbert-base-multilingual-cased-masakhaner
## Model description
**distilbert-base-multilingual-cased-masakhaner** is the first **Named Entity Recognition** model for 9 African languages (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned DistilBERT multilingual model. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *distilbert-base-multilingual-cased* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/distilbert-base-multilingual-cased-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/distilbert-base-multilingual-cased-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 9 African NER datasets (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
hau |88.88
ibo |84.87
kin |74.19
lug |78.43
luo |73.32
pcm |87.98
swa |86.20
wol |64.67
yor |78.10
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
|
JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish | 39adc1e0e25c86c3912534ad2eba4ec0084779ba | 2021-10-10T09:47:45.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"es",
"transformers",
"spanish"
] | text-classification | false | JonatanGk | null | JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish | 53 | 3 | transformers | 5,883 | ---
language: es
tags:
- "spanish"
metrics:
- accuracy
widget:
- text: "Eres mas pequeño que un pitufo!"
- text: "Eres muy feo!"
- text: "Odio tu forma de hablar!"
- text: "Eres tan fea que cuando eras pequeña te echaban de comer por debajo de la puerta."
---
# roberta-base-bne-finetuned-ciberbullying-spanish
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on a dataset generated by scraping social networks (Twitter, YouTube, ...) to detect cyberbullying in Spanish.
It achieves the following results on the evaluation set:
- Loss: 0.1657
- Accuracy: 0.9607
## Training and evaluation data
I used the concatenation of multiple datasets generated by scraping social networks (Twitter, YouTube, Discord, ...) to fine-tune this model. The total number of sentence pairs is above 360k sentences.
## Training procedure
<details>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.1512 | 1.0 | 22227 | 0.9501 | 0.1418 |
| 0.1253 | 2.0 | 44454 | 0.9567 | 0.1499 |
| 0.0973 | 3.0 | 66681 | 0.9594 | 0.1397 |
| 0.0658 | 4.0 | 88908 | 0.9607 | 0.1657 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-bne-finetuned-ciberbullying-spanish"
bullying_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
bullying_analysis(
"Desde que te vi me enamoré de ti."
)
# Output:
[{'label': 'Not_bullying', 'score': 0.9995710253715515}]
bullying_analysis(
"Eres tan fea que cuando eras pequeña te echaban de comer por debajo de la puerta."
)
# Output:
[{'label': 'Bullying', 'score': 0.9918262958526611}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Cyberbullying_detection_(SPANISH).ipynb)
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) |
TransQuest/microtransquest-en_de-it-nmt | 61842e882af256b14dde831809b66961947c1753 | 2021-06-04T08:19:43.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"en-de",
"transformers",
"Quality Estimation",
"microtransquest",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | TransQuest | null | TransQuest/microtransquest-en_de-it-nmt | 53 | null | transformers | 5,884 | ---
language: en-de
tags:
- Quality Estimation
- microtransquest
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
import torch
model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_de-it-nmt", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available())
source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]])
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
TransQuest/microtransquest-en_zh-wiki | 842323880c2a74f59bdb3cd56efdd22ea1a7e3f4 | 2021-06-04T08:22:58.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"en-zh",
"transformers",
"Quality Estimation",
"microtransquest",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | TransQuest | null | TransQuest/microtransquest-en_zh-wiki | 53 | null | transformers | 5,885 | ---
language: en-zh
tags:
- Quality Estimation
- microtransquest
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation for both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words, and target gaps.
- Outperforms current state-of-the-art quality estimation methods such as DeepQuest and OpenKiwi in all the language pairs we experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
import torch
model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_zh-wiki", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available())
# An illustrative English-Chinese segment pair for this en-zh model; inputs are
# whitespace-pretokenized, and the Chinese translation/segmentation here is only an example.
source_sentence = "if not , you may not be protected against the diseases ."
target_sentence = "如果 不 这样 , 您 可能 无法 预防 这些 疾病 。"
source_tags, target_tags = model.predict([[source_sentence, target_sentence]])
```
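The returned tags can then be paired with the whitespace-pretokenized input to see which words the model flags. The sketch below assumes one list of `OK`/`BAD` labels per input pair, with the target-side tags also covering the gaps between target tokens, as described under Features:
```python
# Pair the source-side predictions with the pre-tokenized source words.
source_tokens = "if not , you may not be protected against the diseases .".split()
for token, tag in zip(source_tokens, source_tags[0]):
    print(f"{token}\t{tag}")

# target_tags[0] can be inspected in the same way; it may contain additional
# tags for the gaps between target tokens.
print(target_tags[0])
```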
## Documentation
For more details, follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Check out the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples of how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pre-trained quality estimation models for fifteen language pairs, covering both sentence-level and word-level quality estimation.
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper, which was accepted at [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers, which were presented at [COLING 2020](https://coling2020.org/) and at [WMT 2020](http://www.statmt.org/wmt20/), co-located with EMNLP 2020.
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
allenai/macaw-answer-11b | e81db93a1ceda31aebc7664f70a87f728d34b22c | 2021-09-21T15:59:24.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/macaw-answer-11b | 53 | 7 | transformers | 5,886 | ---
language: en
widget:
- text: $answer$ ; $mcoptions$ ; $question$ = What is the color of a cloudy sky?
license: apache-2.0
---
# macaw-answer-11b
## Model description
Macaw (<b>M</b>ulti-<b>a</b>ngle <b>c</b>(q)uestion <b>a</b>ns<b>w</b>ering) is a ready-to-use model capable of
general question answering,
showing robustness outside the domains it was trained on. It has been trained in "multi-angle" fashion,
which means it can handle a flexible set of input and output "slots"
(question, answer, multiple-choice options, context, and explanation).
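As a rough illustration of the slot format (a minimal sketch following the `$slot$` convention shown in the widget metadata above; see the Macaw repository for authoritative usage examples), you can request the `answer` slot given a `question` slot:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allenai/macaw-answer-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-answer-11b")

# Request the "answer" slot, providing the "question" slot as input.
input_string = "$answer$ ; $question$ = What is the color of a cloudy sky?"
input_ids = tokenizer.encode(input_string, return_tensors="pt")
output_ids = model.generate(input_ids, max_length=200)

# The output is expected to fill the requested slot, e.g. something like "$answer$ = gray".
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```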
Macaw was built on top of [T5](https://github.com/google-research/text-to-text-transfer-transformer) and comes in
three sizes: [macaw-11b](https://huggingface.co/allenai/macaw-11b), [macaw-3b](https://huggingface.co/allenai/macaw-3b),
and [macaw-large](https://huggingface.co/allenai/macaw-large), as well as an answer-focused version featured on
various leaderboards [macaw-answer-11b](https://huggingface.co/allenai/macaw-answer-11b).
See https://github.com/allenai/macaw for more details. |
baykenney/bert-large-gpt2detector-random | 38c2d1a930bfac6b759459a9d270af6374f90b23 | 2021-05-19T12:14:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | baykenney | null | baykenney/bert-large-gpt2detector-random | 53 | null | transformers | 5,887 | Entry not found |
chitra/distilbert-negation | 90697aed80297d8fd77b880dea8596816c03776c | 2022-02-09T06:59:47.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | chitra | null | chitra/distilbert-negation | 53 | null | transformers | 5,888 | Entry not found |
facebook/dino-vits16 | bf35519e78e06dbb878dd88eff1e12fc1c3f8576 | 2021-08-25T17:38:44.000Z | [
"pytorch",
"vit",
"feature-extraction",
"dataset:imagenet-1k",
"arxiv:2010.11929",
"arxiv:2104.14294",
"transformers",
"dino",
"license:apache-2.0"
] | feature-extraction | false | facebook | null | facebook/dino-vits16 | 53 | null | transformers | 5,889 | ---
license: apache-2.0
tags:
- dino
datasets:
- imagenet-1k
---
# Vision Transformer (small-sized model, patch size 16) trained using DINO
Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino).
Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not include any fine-tuned heads.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model to extract image features for downstream tasks such as image classification. See the [model hub](https://huggingface.co/models?search=facebook/dino) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import ViTFeatureExtractor, ViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('facebook/dino-vits16')
model = ViTModel.from_pretrained('facebook/dino-vits16')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
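Because no fine-tuned head is included, a common next step (a minimal sketch; `train_features` and `train_labels` are hypothetical placeholders for your own labeled data) is to treat the [CLS] embedding as a fixed image descriptor and fit a lightweight linear probe on it:
```python
import torch

# The [CLS] token sits at position 0 of the last hidden state and can serve as a
# global descriptor of the image (hidden size 384 for this ViT-S/16 backbone).
with torch.no_grad():
    cls_embedding = model(**inputs).last_hidden_state[:, 0]

# A linear probe can then be trained on such features, for example:
# from sklearn.linear_model import LogisticRegression
# clf = LogisticRegression(max_iter=1000).fit(train_features, train_labels)
# predictions = clf.predict(cls_embedding.numpy())
```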
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-14294,
author = {Mathilde Caron and
Hugo Touvron and
Ishan Misra and
Herv{\'{e}} J{\'{e}}gou and
Julien Mairal and
Piotr Bojanowski and
Armand Joulin},
title = {Emerging Properties in Self-Supervised Vision Transformers},
journal = {CoRR},
volume = {abs/2104.14294},
year = {2021},
url = {https://arxiv.org/abs/2104.14294},
archivePrefix = {arXiv},
eprint = {2104.14294},
timestamp = {Tue, 04 May 2021 15:12:43 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
fmikaelian/flaubert-base-uncased-squad | 39a3c87a64ed520c20c6e6eea9e97bb6bdc09c69 | 2020-12-11T21:40:15.000Z | [
"pytorch",
"flaubert",
"question-answering",
"fr",
"transformers",
"autotrain_compatible"
] | question-answering | false | fmikaelian | null | fmikaelian/flaubert-base-uncased-squad | 53 | 1 | transformers | 5,890 | ---
language: fr
---
# flaubert-base-uncased-squad
## Description
A baseline model for question answering in French (a [flaubert](https://github.com/getalp/Flaubert) model fine-tuned on the [French-translated SQuAD 1.1 dataset](https://github.com/Alikabbadj/French-SQuAD)).
## Training hyperparameters
```shell
python3 ./examples/question-answering/run_squad.py \
--model_type flaubert \
--model_name_or_path flaubert-base-uncased \
--do_train \
--do_eval \
--do_lower_case \
--train_file SQuAD-v1.1-train_fr_ss999_awstart2_net.json \
--predict_file SQuAD-v1.1-dev_fr_ss999_awstart2_net.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir output \
--per_gpu_eval_batch_size=3 \
--per_gpu_train_batch_size=3
```
## Evaluation results
```shell
{"f1": 68.66174806561969, "exact_match": 49.299692063176714}
```
## Usage
```python
from transformers import pipeline
nlp = pipeline('question-answering', model='fmikaelian/flaubert-base-uncased-squad', tokenizer='fmikaelian/flaubert-base-uncased-squad')
nlp({
'question': "Qui est Claude Monet?",
'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
``` |
google/tapas-tiny-finetuned-sqa | 9e62729ba223d47b8812e264d621e719d7fdaaf5 | 2021-11-29T13:08:47.000Z | [
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:msr_sqa",
"arxiv:2004.02349",
"arxiv:2010.00571",
"transformers",
"license:apache-2.0"
] | table-question-answering | false | google | null | google/tapas-tiny-finetuned-sqa | 53 | null | transformers | 5,891 | ---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- msr_sqa
---
# TAPAS tiny model fine-tuned on Sequential Question Answering (SQA)
This model has two versions that can be used. The default version corresponds to the `tapas_sqa_inter_masklm_tiny_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained with MLM and an additional step the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings (i.e., resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_sqa_inter_masklm_tiny` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results on SQA - Dev Accuracy
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.7223 | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset)
LARGE | reset | 0.7289 | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main)
BASE | noreset | 0.6737 | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset)
BASE | reset | 0.6874 | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main)
MEDIUM | noreset | 0.6464 | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset)
MEDIUM | reset | 0.6561 | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main)
SMALL | noreset | 0.5876 | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset)
SMALL | reset | 0.6155 | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main)
MINI | noreset | 0.4574 | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset)
MINI | reset | 0.5148 | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main)
**TINY** | **noreset** | **0.2004** | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset)
**TINY** | **reset** | **0.2375** | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly
training this randomly initialized classification head with the base model on SQA.
## Intended uses & limitations
You can use this model for answering questions related to a table in a conversational set-up.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
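As a quick, minimal sketch (the table and question below are purely illustrative, and the exact output fields may vary), the `table-question-answering` pipeline in `transformers` can be used with this checkpoint:
```python
from transformers import pipeline

table_qa = pipeline("table-question-answering", model="google/tapas-tiny-finetuned-sqa")

# Tables are passed as columns of strings (a pandas DataFrame also works).
table = {
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
}

result = table_qa(table=table, query="How many movies has George Clooney played in?")
print(result)  # expected to contain the selected cell(s), e.g. "69"
```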
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128.
In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio
of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@InProceedings{iyyer2017search-based,
author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei},
title = {Search-based Neural Structured Learning for Sequential Question Answering},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
year = {2017},
month = {July},
abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.},
publisher = {Association for Computational Linguistics},
url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/},
}
``` |
jcblaise/gpt2-tagalog | a93a67a87de37c526fb3aa414b2c4cae166f4c79 | 2021-11-11T06:12:25.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0"
] | text-generation | false | jcblaise | null | jcblaise/gpt2-tagalog | 53 | null | transformers | 5,892 | ---
language: tl
tags:
- gpt2
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# GPT-2 Tagalog
The Tagalog GPT-2 model used to benchmark our fake news detection system in Cruz et al. (2020). We make available an improved version of our GPT-2 model, trained with NewsPH in addition to WikiText-TL-39.
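A minimal usage sketch (the Tagalog prompt below is only illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="jcblaise/gpt2-tagalog")

# "Ang Pilipinas ay" roughly means "The Philippines is"; completions will tend to
# read like news or encyclopedic text, per the training data described below.
print(generator("Ang Pilipinas ay", max_length=50, num_return_sequences=1))
```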
## Limitations and Bias
The model was trained with two language modeling datasets for Tagalog:
* **WikiText-TL-39**, which is sourced from a dump of Tagalog Wikipedia.
* **NewsPH**, which is a dump of news articles from all available mainstream news outlets in the Philippines.
Due to the source of the training data, generated sentences out-of-the-box may sound and read like actual news articles, possessing the common tone and style of these works. While these may *look* like news articles, these are *not* news articles, and should not be read, understood, published, or shared as one. Language models do not inherently distinguish factual statements from non-factual ones, and as such, we discourage use of the model in systems and use-cases where the generated output is required to be true.
As this model is currently a prototype, bias has not been thoroughly studied. Models inherit biases that are present in the data they are trained with. Things such as the frequency with which genders are associated with particular occupations can induce certain biases in the model that will remain undetected unless thoroughly tested. As with the original GPT-2 model, we recommend that this model not be deployed or used in systems that interact with humans unless a thorough study of potential biases is carried out.
We release this model with the intent that it may aid in the advancement of Filipino NLP, and that researchers and engineers who are interested in applying their work to the language may have a baseline model to use. For future work, in addition to the study of inherent bias, we mainly look into improving the quality of our models. As this is a prototype, a large-scale corpus was not used to train it. We plan to train larger GPT-2 models with larger corpora in the future.
## Citations
```bibtex
@inproceedings{localization2020cruz,
title={{Localization of Fake News Detection via Multitask Transfer Learning}},
author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={2589--2597},
year={2020},
url={https://www.aclweb.org/anthology/2020.lrec-1.315}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected] |
m3hrdadfi/wav2vec2-base-100k-gtzan-music-genres | caf978c8328a2cec4229b3eb0b41b162e379caa1 | 2021-07-06T10:26:27.000Z | [
"pytorch",
"wav2vec2",
"transformers",
"audio",
"automatic-speech-recognition",
"audio-classification"
] | automatic-speech-recognition | false | m3hrdadfi | null | m3hrdadfi/wav2vec2-base-100k-gtzan-music-genres | 53 | 4 | transformers | 5,893 | ---
tags:
- audio
- automatic-speech-recognition
- audio-classification
---
# Music Genre Classification using Wav2Vec 2.0
## How to use
### Requirements
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```
### Prediction
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/wav2vec2-base-100k-voxpopuli-gtzan-music"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
# Note: Wav2Vec2ForSpeechClassification is not part of transformers; it is the custom
# classification head defined in the author's soxan repository (https://github.com/m3hrdadfi/soxan).
model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```
```python
def speech_file_to_array_fn(path, sampling_rate):
    # load the audio file and resample it to the feature extractor's sampling rate
    speech_array, _sampling_rate = torchaudio.load(path)
    resampler = torchaudio.transforms.Resample(_sampling_rate, sampling_rate)
    speech = resampler(speech_array).squeeze().numpy()
    return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
return outputs
```
```python
path = "genres_original/disco/disco.00067.wav"
outputs = predict(path, sampling_rate)
```
```bash
[
{'Label': 'blues', 'Score': '0.0%'},
{'Label': 'classical', 'Score': '0.0%'},
{'Label': 'country', 'Score': '0.0%'},
{'Label': 'disco', 'Score': '99.8%'},
{'Label': 'hiphop', 'Score': '0.0%'},
{'Label': 'jazz', 'Score': '0.0%'},
{'Label': 'metal', 'Score': '0.0%'},
{'Label': 'pop', 'Score': '0.0%'},
{'Label': 'reggae', 'Score': '0.0%'},
{'Label': 'rock', 'Score': '0.0%'}
]
```
## Evaluation
The following table summarizes the scores obtained by the model, both overall and for each class.
| label | precision | recall | f1-score | support |
|:------------:|:---------:|:------:|:--------:|:-------:|
| blues | 0.792 | 0.950 | 0.864 | 20 |
| classical | 0.864 | 0.950 | 0.905 | 20 |
| country | 0.812 | 0.650 | 0.722 | 20 |
| disco | 0.778 | 0.700 | 0.737 | 20 |
| hiphop | 0.933 | 0.700 | 0.800 | 20 |
| jazz | 1.000 | 0.850 | 0.919 | 20 |
| metal | 0.783 | 0.900 | 0.837 | 20 |
| pop | 0.917 | 0.550 | 0.687 | 20 |
| reggae | 0.543 | 0.950 | 0.691 | 20 |
| rock | 0.611 | 0.550 | 0.579 | 20 |
| accuracy | 0.775 | 0.775 | 0.775 | 0 |
| macro avg | 0.803 | 0.775 | 0.774 | 200 |
| weighted avg | 0.803 | 0.775 | 0.774 | 200 |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues). |
mys/electra-base-turkish-cased-ner | d1aeabc657ef56b5ea68268b94ccfc34b36b0fcf | 2020-12-11T21:56:51.000Z | [
"pytorch",
"tf",
"electra",
"token-classification",
"tr",
"transformers",
"autotrain_compatible"
] | token-classification | false | mys | null | mys/electra-base-turkish-cased-ner | 53 | 2 | transformers | 5,894 | ---
language: tr
---
## What is this
A NER model for Turkish with 48 categories trained on the dataset [Shrinked TWNERTC Turkish NER Data](https://www.kaggle.com/behcetsenturk/shrinked-twnertc-turkish-ner-data-by-kuzgunlar) by Behçet Şentürk, which is itself a filtered and cleaned version of the following automatically labeled dataset:
> Sahin, H. Bahadir; Eren, Mustafa Tolga; Tirkaz, Caglar; Sonmez, Ozan; Yildiz, Eray (2017), “English/Turkish Wikipedia Named-Entity Recognition and Text Categorization Dataset”, Mendeley Data, v1 http://dx.doi.org/10.17632/cdcztymf4k.1
## Backbone model
The backbone model is [electra-base-turkish-cased-discriminator](https://huggingface.co/dbmdz/electra-base-turkish-cased-discriminator), and I finetuned it for token classification.
I'm continuing to figure out if it is possible to improve accuracy with this dataset, but it is already usable for non-critical applications. You can reach out to me on [Twitter](https://twitter.com/myusufsarigoz) for discussions and issues.
I will also release a notebook to finetune NER models with Shrinked TWNERTC as well as sample inference code to demonstrate what's possible with this model.
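In the meantime, a minimal inference sketch looks like the following (the sentence is only an illustrative Turkish example, and the entity labels returned follow the 48-category scheme of the dataset above):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mys/electra-base-turkish-cased-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Mustafa Kemal Atatürk 1881 yılında Selanik'te doğdu."))
```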
|
nickmuchi/distilroberta-finetuned-financial-text-classification | e66b163632379b0f926c8aec99543d48fe83c02a | 2022-04-06T06:05:03.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"en",
"dataset:financial_phrasebank",
"dataset:Kaggle Self label",
"dataset:nickmuchi/financial-classification",
"transformers",
"financial-sentiment-analysis",
"sentiment-analysis",
"sentence_50agree",
"generated_from_trainer",
"financial",
"stocks",
"sentiment",
"license:apache-2.0",
"model-index"
] | text-classification | false | nickmuchi | null | nickmuchi/distilroberta-finetuned-financial-text-classification | 53 | null | transformers | 5,895 | ---
license: apache-2.0
language: "en"
tags:
- financial-sentiment-analysis
- sentiment-analysis
- sentence_50agree
- generated_from_trainer
- financial
- stocks
- sentiment
datasets:
- financial_phrasebank
- Kaggle Self label
- nickmuchi/financial-classification
metrics:
- f1
widget:
- text: "The USD rallied by 10% last night"
example_title: "Bullish Sentiment"
- text: "Covid-19 cases have been increasing over the past few months impacting earnings for global firms"
example_title: "Bearish Sentiment"
- text: "the USD has been trending lower"
example_title: "Mildly Bearish Sentiment"
model-index:
- name: distilroberta-finetuned-finclass
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: finance
args: sentence_50agree
metrics:
- type: F1
name: F1
value: 0.8835
- type: accuracy
name: accuracy
value: 0.89
---
# distilroberta-finetuned-financial-text-classification
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the sentence_50Agree [financial-phrasebank + Kaggle Dataset](https://huggingface.co/datasets/nickmuchi/financial-classification), a dataset consisting of 4840 Financial News categorised by sentiment (negative, neutral, positive). The Kaggle dataset includes Covid-19 sentiment data and can be found here: [sentiment-classification-selflabel-dataset](https://www.kaggle.com/percyzheng/sentiment-classification-selflabel-dataset).
It achieves the following results on the evaluation set:
- Loss: 0.4463
- F1: 0.8835
## Model description
The model determines the financial sentiment of a given text. Given the unbalanced distribution of the class labels, class weights were adjusted to pay more attention to the less-sampled labels, which should increase overall performance. The Covid dataset was added in order to enrich the model, given that most models have not been trained on the impact of Covid-19 on earnings or markets.
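A minimal usage sketch with the widget examples above (the exact label names returned depend on this model's label mapping, so inspect the output rather than assuming specific strings):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nickmuchi/distilroberta-finetuned-financial-text-classification",
)

print(classifier("The USD rallied by 10% last night"))
print(classifier("Covid-19 cases have been increasing over the past few months impacting earnings for global firms"))
```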
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7309 | 1.0 | 72 | 0.3671 | 0.8441 |
| 0.3757 | 2.0 | 144 | 0.3199 | 0.8709 |
| 0.3054 | 3.0 | 216 | 0.3096 | 0.8678 |
| 0.2229 | 4.0 | 288 | 0.3776 | 0.8390 |
| 0.1744 | 5.0 | 360 | 0.3678 | 0.8723 |
| 0.1436 | 6.0 | 432 | 0.3728 | 0.8758 |
| 0.1044 | 7.0 | 504 | 0.4116 | 0.8744 |
| 0.0931 | 8.0 | 576 | 0.4148 | 0.8761 |
| 0.0683 | 9.0 | 648 | 0.4423 | 0.8837 |
| 0.0611 | 10.0 | 720 | 0.4463 | 0.8835 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large | 96633c569d2e35adf610f2deead8d8300953f8c7 | 2021-06-20T19:02:55.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nreimers | null | nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large | 53 | 3 | transformers | 5,896 | # MiniLMv2
This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm) |
beomi/kcelectra-v2022-dev | 42204d40600422c885cbecef412ee2aa378316ae | 2022-03-24T05:51:11.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers",
"license:mit"
] | null | false | beomi | null | beomi/kcelectra-v2022-dev | 53 | 1 | transformers | 5,897 | ---
license: mit
---
|
Jeevesh8/feather_berts_95 | 3ddf7cf9cea53386c95d45a820dbcbaad45b27af | 2022-04-20T13:55:22.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/feather_berts_95 | 53 | null | transformers | 5,898 | Entry not found |
imxly/t5-copy | b8511ae016e65773141b478a8da512ef99c69d36 | 2022-05-05T09:08:00.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | imxly | null | imxly/t5-copy | 53 | null | transformers | 5,899 | Entry not found |