modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
kingabzpro/Helsinki-NLP-opus-yor-mul-en | c4b80c5880959550552c8e2c9b639df1fe5bb10c | 2021-08-03T08:43:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"Yorùbá",
"dataset:AI4D-Africa - Yorùbá Machine Translation Challenge",
"transformers",
"text",
"machine-translation",
"language-translation",
"seq2seq",
"helsinki-nlp",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | kingabzpro | null | kingabzpro/Helsinki-NLP-opus-yor-mul-en | 12 | 1 | transformers | 10,600 | ---
language: Yorùbá
datasets:
- AI4D-Africa - Yorùbá Machine Translation Challenge
tags:
- text
- machine-translation
- language-translation
- seq2seq
- helsinki-nlp
license: apache-2.0
metrics:
- ROUGE
---
## Predicting English Translation
```python
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Loading tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("kingabzpro/Helsinki-NLP-opus-yor-mul-en")
model = AutoModelForSeq2SeqLM.from_pretrained("kingabzpro/Helsinki-NLP-opus-yor-mul-en").to('cuda')
# Prediction
a = model.generate(**tokenizer.prepare_seq2seq_batch('Nínú ìpè kan lẹ́yìn ìgbà náà, wọ́n sọ fún aṣojú iléeṣẹ́ BlaBlaCar pé ètò náà ti yí padà, pé',return_tensors='pt').to('cuda'))
text = tokenizer.batch_decode(a)
# Cleaning text
text = str(text)
text = re.sub("<pad> ","",text)
text = re.sub("'","",text)
text = text.replace("[", "")
text = text.replace("]", "")
text
```
## Result
```
'In a statement after that hearing, the BualaCard’s representative was told that the event had changed, that he had turned up.'
```
## ROUGE Score
**0.3025**
|
kingabzpro/wav2vec2-60-Urdu-V8 | 26321cf95b2813b91fcb41ea5b0107d1288dafc5 | 2022-03-24T11:55:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | kingabzpro | null | kingabzpro/wav2vec2-60-Urdu-V8 | 12 | 1 | transformers | 10,601 | ---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
model-index:
- name: wav2vec2-urdu-V8-Abid
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice ur
args: ur
metrics:
- type: wer
value: 44.63
name: Test WER
args:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 50
- mixed_precision_training: Native AMP
- type: cer
value: 18.82
name: Test CER
args:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 50
- mixed_precision_training: Native AMP
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-60-Urdu-V8
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-urdu-urm-60](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-urdu-urm-60) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 11.4832
- Wer: 0.5729
- Cer: 0.3170
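For quick inference, a minimal sketch using the `transformers` ASR pipeline is shown below (not part of the original card; the audio path is a placeholder and the input is assumed to be 16 kHz mono audio):
```python
from transformers import pipeline

# Load the fine-tuned Urdu model through the speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="kingabzpro/wav2vec2-60-Urdu-V8")

# "sample.wav" is a placeholder path for a 16 kHz Urdu recording
print(asr("sample.wav")["text"])
```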
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 19.671 | 8.33 | 100 | 7.7671 | 0.8795 | 0.4492 |
| 2.085 | 16.67 | 200 | 9.2759 | 0.6201 | 0.3320 |
| 0.6633 | 25.0 | 300 | 8.7025 | 0.5738 | 0.3104 |
| 0.388 | 33.33 | 400 | 10.2286 | 0.5852 | 0.3128 |
| 0.2822 | 41.67 | 500 | 11.1953 | 0.5738 | 0.3174 |
| 0.2293 | 50.0 | 600 | 11.4832 | 0.5729 | 0.3170 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
l3cube-pune/hate-bert-hasoc-marathi | f7fe5eff28b6bcaeacd926f89bc100af394ac210 | 2022-06-12T12:38:26.000Z | [
"pytorch",
"tf",
"albert",
"text-classification",
"mr",
"dataset:HASOC 2021",
"arxiv:2110.12200",
"transformers",
"license:cc-by-4.0"
]
| text-classification | false | l3cube-pune | null | l3cube-pune/hate-bert-hasoc-marathi | 12 | 1 | transformers | 10,602 | ---
language: mr
tags:
- albert
license: cc-by-4.0
datasets:
- HASOC 2021
widget:
- text: "I like you. </s></s> I love you."
---
## hate-bert-hasoc-marathi
hate-bert-hasoc-marathi is a binary hate speech model fine-tuned on Marathi Hasoc Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Hate.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2110.12200).
A new version of Marathi Hate Speech Detection models can be found here: <br>
binary: https://huggingface.co/l3cube-pune/mahahate-bert <br>
multi label: https://huggingface.co/l3cube-pune/mahahate-multi-roberta <br>
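For reference, a minimal inference sketch with the `transformers` pipeline (not part of the original card; the example sentence is illustrative):
```python
from transformers import pipeline

# Binary hate-speech classifier; per the card, label 0 -> None, 1 -> Hate
classifier = pipeline("text-classification", model="l3cube-pune/hate-bert-hasoc-marathi")
print(classifier("तुम्ही खूप छान आहात"))  # an illustrative (benign) Marathi sentence
```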
```
@article{velankar2021hate,
title={Hate and Offensive Speech Detection in Hindi and Marathi},
author={Velankar, Abhishek and Patil, Hrushikesh and Gore, Amol and Salunke, Shubham and Joshi, Raviraj},
journal={arXiv preprint arXiv:2110.12200},
year={2021}
}
``` |
llangnickel/long-covid-classification | d914996f532b6a7b81f375ddc665551eae5099b8 | 2022-07-04T19:28:06.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
]
| text-classification | false | llangnickel | null | llangnickel/long-covid-classification | 12 | null | transformers | 10,603 | ---
license: mit
---
## long-covid-classification
We fine-tuned bert-base-cased using a [manually curated dataset](https://huggingface.co/llangnickel/long-covid-classification-data) to train a Sequence Classification model able to distinguish between long COVID and non-long COVID-related documents.
## Used hyperparameters
|Parameter|Value|
|---|---|
|Learning rate|3e-5|
|Batch size|16|
|Number of epochs|4|
|Sequence Length|512|
## Metrics
|Precision [%]|Recall [%]|F1-score [%]|
|---|---|---|
|91.18|91.18|91.18|
## How to load the model
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("llangnickel/long-covid-classification", use_auth_token=True)
label_dict = {0: "nonLongCOVID", 1: "longCOVID"}
model = AutoModelForSequenceClassification.from_pretrained("llangnickel/long-covid-classification", use_auth_token=True, num_labels=len(label_dict))
```
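A minimal classification sketch continuing from the loading code above (not part of the original card; the input text is a placeholder, not taken from the paper's dataset):
```python
import torch

# Placeholder abstract-like text; the model expects document text up to 512 tokens
text = "Patients report persistent fatigue and dyspnea several months after acute SARS-CoV-2 infection."
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(label_dict[int(logits.argmax(dim=-1))])
```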
## Citation
@article{10.1093/database/baac048,
author = {Langnickel, Lisa and Darms, Johannes and Heldt, Katharina and Ducks, Denise and Fluck, Juliane},
title = "{Continuous development of the semantic search engine preVIEW: from COVID-19 to long COVID}",
journal = {Database},
volume = {2022},
year = {2022},
month = {07},
issn = {1758-0463},
doi = {10.1093/database/baac048},
url = {https://doi.org/10.1093/database/baac048},
note = {baac048},
eprint = {https://academic.oup.com/database/article-pdf/doi/10.1093/database/baac048/44371817/baac048.pdf},
} |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_ep_10 | f34fe7e07a03c6016d9e1957a5beb11daf35acc6 | 2021-10-25T19:54:26.000Z | [
"pytorch",
"roberta",
"transformers"
]
| null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_ep_10 | 12 | null | transformers | 10,604 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_oppo | 539ca45cfe17fa403bd8e6ed55f37188337100e6 | 2021-10-26T07:55:09.000Z | [
"pytorch",
"roberta",
"transformers"
]
| null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_oppo | 12 | null | transformers | 10,605 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_5e6_bb_lr_5e6_wu_7k_grad_adam | 02a8790e1571a2aae34f04791d017da45c010939 | 2021-10-30T23:35:38.000Z | [
"pytorch",
"roberta",
"transformers"
]
| null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_5e6_bb_lr_5e6_wu_7k_grad_adam | 12 | null | transformers | 10,606 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_5e6_bb_lr_5e6_wu_7k_grad_adam_mask | ca63ce03489c34f3306ad7d21496823aa9a2c5c1 | 2021-10-31T20:56:51.000Z | [
"pytorch",
"roberta",
"transformers"
]
| null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_5e6_bb_lr_5e6_wu_7k_grad_adam_mask | 12 | null | transformers | 10,607 | Entry not found |
lvwerra/gpt2-imdb-pos | 649cebcaa8604cbf6124f3d26651d9f5cc1e0e56 | 2021-05-23T08:37:41.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | lvwerra | null | lvwerra/gpt2-imdb-pos | 12 | null | transformers | 10,608 | # GPT2-IMDB-pos
## What is it?
A small GPT2 (`lvwerra/gpt2-imdb`) language model fine-tuned to produce positive movie reviews based on the [IMDB dataset](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews). The model is trained with rewards from a BERT sentiment classifier (`lvwerra/bert-imdb`) via PPO.
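A minimal generation sketch with `transformers` (not part of the original card; the prompt and sampling settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lvwerra/gpt2-imdb-pos")
model = AutoModelForCausalLM.from_pretrained("lvwerra/gpt2-imdb-pos")

# Continue an IMDB-style review prefix; sampling settings are arbitrary choices
inputs = tokenizer("I'd never seen a", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_k=50, max_new_tokens=30,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```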
## Training setting
The model was trained for `100` optimisation steps with a batch size of `256` which corresponds to `25600` training samples. The full experiment setup can be found in the Jupyter notebook in the [trl repo](https://lvwerra.github.io/trl/04-gpt2-sentiment-ppo-training/).
## Examples
A few examples of the model response to a query before and after optimisation:
| query | response (before) | response (after) | rewards (before) | rewards (after) |
|-------|-------------------|------------------|------------------|-----------------|
|I'd never seen a |heavier, woodier example of Victorian archite... |film of this caliber, and I think it's wonder... |3.297736 |4.158653|
|I love John's work |but I actually have to write language as in w... |and I hereby recommend this film. I am really... |-1.904006 |4.159198 |
|I's a big struggle |to see anyone who acts in that way. by Jim Th... |, but overall I'm happy with the changes even ... |-1.595925 |2.651260|
|
m3hrdadfi/albert-fa-base-v2-sentiment-snappfood | e02e74a033a1f3a43b101153c666894f0d40c2df | 2020-12-26T08:49:28.000Z | [
"pytorch",
"tf",
"albert",
"text-classification",
"fa",
"transformers",
"license:apache-2.0"
]
| text-classification | false | m3hrdadfi | null | m3hrdadfi/albert-fa-base-v2-sentiment-snappfood | 12 | null | transformers | 10,609 | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.
### SnappFood
[Snappfood](https://snappfood.ir/) (an online food delivery company) user comments containing 70,000 comments with two labels (i.e. polarity classification):
1. Happy
2. Sad
| Label | # |
|:--------:|:-----:|
| Negative | 35000 |
| Positive | 35000 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=15J4zPN1BD7Q_ZIQ39VeFquwSoW8qTxgu)
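A minimal usage sketch with the `transformers` pipeline (not part of the original card; the Persian example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="m3hrdadfi/albert-fa-base-v2-sentiment-snappfood")
print(classifier("غذا خیلی خوشمزه بود"))  # "The food was very tasty"
```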
## Results
The following table summarizes the F1 score obtained as compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| SnappFood User Comments | 85.79 | 88.12 | 87.87 | - |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
malay-huggingface/t5-small-bahasa-cased | c2bdb69b07dbb25f2b329def9b776210dca6de0d | 2021-09-05T12:53:30.000Z | [
"pytorch",
"t5",
"feature-extraction",
"ms",
"transformers"
]
| feature-extraction | false | malay-huggingface | null | malay-huggingface/t5-small-bahasa-cased | 12 | null | transformers | 10,610 | ---
language: ms
---
# t5-small-bahasa-cased
Pretrained T5 small language model for Malay.
## Pretraining Corpus
`t5-small-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
4. Translated QA Natural.
5. Text Similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive Summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.
Preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using the Google T5 repository https://github.com/google-research/text-to-text-transfer-transformer, on a v3-8 TPU.
- All steps can be reproduced from here: https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, and then initializing it like this:
```python
from transformers import T5Tokenizer, T5Model
model = T5Model.from_pretrained('malay-huggingface/t5-small-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-small-bahasa-cased')
```
## Example using T5ForConditionalGeneration
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-small-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('malay-huggingface/t5-small-bahasa-cased')
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors = 'pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
Output is,
```
'Mahathir Mohamad'
```
## Supported prefix
1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title.
4. `parafrasa: {string}`, for abstractive paraphrase.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, semantic similarity. |
manandey/wav2vec2-large-xlsr-_irish | cd3bd4e5203a049b6739790627cd843fcf5eb287 | 2022-03-25T16:53:49.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ga",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | manandey | null | manandey/wav2vec2-large-xlsr-_irish | 12 | null | transformers | 10,611 | ---
language: ga
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Irish by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ga-IE
type: common_voice
args: ga-IE
metrics:
- name: Test WER
type: wer
value: 42.34
---
# Wav2Vec2-Large-XLSR-53-Irish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Irish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Irish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ga-IE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 42.34%
## Training
The Common Voice `train`, `validation` datasets were used for training. |
manav/causal_qa | cd86c3a19560f9135165aa89c47230681cbcc458 | 2021-05-19T22:48:49.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | manav | null | manav/causal_qa | 12 | null | transformers | 10,612 | This is a BERT-based QA model fine-tuned to answer causal questions. The original model this is based on can be found [here](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2). Analysis of this model is associated with the work found at the following [repo](https://github.com/kstats/CausalQG). |
maroo93/practice00 | 2b1969d39fe0e579d21c0c40173e813083b22d7c | 2021-05-19T23:05:30.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | maroo93 | null | maroo93/practice00 | 12 | null | transformers | 10,613 | Entry not found |
mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili | c5870bf17c7b54bd658e4a8c29f2bec808fc3934 | 2021-11-25T09:04:12.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"sw",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili | 12 | null | transformers | 10,614 | ---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---
# xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-luganda](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set; since its distribution is similar to that of the training set, these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) (This model) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo | 04c29cd77e99f5753c55c7023c6500188996147a | 2021-11-25T09:04:15.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"luo",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo | 12 | null | transformers | 10,615 | ---
language:
- luo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"
---
# xlm-roberta-base-finetuned-luo-finetuned-ner-luo
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Luo part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set; since its distribution is similar to that of the training set, these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo) (This model) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | luo | 78.71 | 78.91 | 78.52 | 72.00 | 84.00 | 59.00 | 87.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | luo | 78.13 | 77.75 | 78.52 | 65.00 | 82.00 | 61.00 | 89.00 |
| [xlm-roberta-base-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-luo) | [base](https://huggingface.co/xlm-roberta-base) | luo | 75.99 | 76.18 | 75.80 | 71.00 | 76.00 | 62.00 | 85.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda | 5dba1567dba74cdf572df06b6f69b8e6cd19d665 | 2021-11-25T09:04:30.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"rw",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda | 12 | null | transformers | 10,616 | ---
language:
- rw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati “ Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
---
# xlm-roberta-base-finetuned-ner-kinyarwanda
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Kinyarwanda part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set; since its distribution is similar to that of the training set, these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda) (This model) | [base](https://huggingface.co/xlm-roberta-base) | kin | 74.59 | 72.17 | 77.17 | 70.00 | 75.00 | 70.00 | 82.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | kin | 79.55 | 75.56 | 83.99 | 69.00 | 79.00 | 77.00 | 90.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | kin | 76.31 | 72.64 | 80.37 | 70.00 | 76.00 | 75.00 | 84.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati “ Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda | f5dbc45ebe3cc5a1735dd354bf45d009f6793d26 | 2021-11-25T09:04:53.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"rw",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda | 12 | null | transformers | 10,617 | ---
language:
- rw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati “ Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
---
# xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Kinyarwanda part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain-adaptive fine-tuning on specific languages), that model's limitations can also apply here. These include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded, and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set; these results therefore do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
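For reference, span-level NER F1 of this kind is typically computed with `seqeval` (an illustrative sketch on toy labels; the exact evaluation code is in the main Github repository):
```python
from seqeval.metrics import f1_score, precision_score, recall_score

# Toy example: one sentence with gold vs. predicted BIO tags.
y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O", "O"]]

print("F1:", f1_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:", recall_score(y_true, y_pred))
```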
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | kin | 76.31 | 72.64 | 80.37 | 70.00 | 76.00 | 75.00 | 84.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | kin | 79.55 | 75.56 | 83.99 | 69.00 | 79.00 | 77.00 | 90.00 |
| [xlm-roberta-base-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda) | [base](https://huggingface.co/xlm-roberta-base) | kin | 74.59 | 72.17 | 77.17 | 70.00 | 75.00 | 70.00 | 82.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati “ Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
ner_results = nlp(example)
print(ner_results)
```
|
mgreenbe/bertlet-base-uncased-for-sequence-classification | 4304bae03a8712c21a223b933283ad0c827577ac | 2021-11-20T17:23:02.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | mgreenbe | null | mgreenbe/bertlet-base-uncased-for-sequence-classification | 12 | 1 | transformers | 10,618 | ---
tags:
- generated_from_trainer
model-index:
- name: bertlet-base-uncased-for-sequence-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertlet-base-uncased-for-sequence-classification
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
microsoft/unihanlm-base | af5693b4a92ba50b66c557868cf83ef2dfadc392 | 2021-09-22T09:00:56.000Z | [
"pytorch",
"tf",
"xlm",
"feature-extraction",
"zh",
"ja",
"dataset:Wikipedia",
"transformers",
"crosslingual",
"license:apache-2.0"
]
| feature-extraction | false | microsoft | null | microsoft/unihanlm-base | 12 | 1 | transformers | 10,619 | ---
language:
- zh
- ja
tags:
- crosslingual
license: apache-2.0
datasets:
- Wikipedia
---
# Unihan LM: Coarse-to-Fine Chinese-Japanese Language Model Pretraining with the Unihan Database
## Model description
Chinese and Japanese share many characters with similar surface morphology. To better utilize the shared knowledge across the languages, we propose UnihanLM, a self-supervised Chinese-Japanese pretrained masked language model (MLM) with a novel two-stage coarse-to-fine training approach. We exploit Unihan, a ready-made database constructed by linguistic experts to first merge morphologically similar characters into clusters. The resulting clusters are used to replace the original characters in sentences for the coarse-grained pretraining of the MLM. Then, we restore the clusters back to the original characters in sentences for the fine-grained pretraining to learn the representation of the specific characters. We conduct extensive experiments on a variety of Chinese and Japanese NLP benchmarks, showing that our proposed UnihanLM is effective on both mono- and cross-lingual Chinese and Japanese tasks, shedding light on a new path to exploit the homology of languages. [Paper](https://www.aclweb.org/anthology/2020.aacl-main.24/)
## Intended uses & limitations
#### How to use
Use it the same way you would use XLM :)
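A minimal feature-extraction sketch through the standard XLM-style interfaces (the example sentence is arbitrary; see the note on tokenization below):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/unihanlm-base")
model = AutoModel.from_pretrained("microsoft/unihanlm-base")

text = "自然言語処理を研究しています。"  # arbitrary Japanese example sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```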
#### Limitations and bias
The training corpus is solely from Wikipedia, so the model may perform worse on informal text. Be careful with English words! The tokenizer will cut them into individual characters.
## Training data
We use Chinese and Japanese Wikipedia to train the model.
## Training procedure
Please refer to our paper: https://www.aclweb.org/anthology/2020.aacl-main.24/
## Eval results
Please refer to our paper: https://www.aclweb.org/anthology/2020.aacl-main.24/
### BibTeX entry and citation info
```bibtex
@inproceedings{xu-etal-2020-unihanlm,
title = "{U}nihan{LM}: Coarse-to-Fine {C}hinese-{J}apanese Language Model Pretraining with the Unihan Database",
author = "Xu, Canwen and
Ge, Tao and
Li, Chenliang and
Wei, Furu",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.aacl-main.24",
pages = "201--211"
}
``` |
microsoft/unilm-large-cased | 5818e0466f86ed8e4b2be9423afca2a6398ac2b9 | 2020-04-28T21:22:59.000Z | [
"pytorch",
"transformers"
]
| null | false | microsoft | null | microsoft/unilm-large-cased | 12 | null | transformers | 10,620 | Entry not found |
midas/gupshup_e2e_gpt | 3d81322149ff40f77f9861498e390ebfdebf06c9 | 2021-11-14T02:08:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:1910.04073",
"transformers"
]
| text-generation | false | midas | null | midas/gupshup_e2e_gpt | 12 | null | transformers | 10,621 | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations use the `.source` (train.source) file extension, whereas summaries use the `.target` (train.target) file extension. The ".source" file needs to be provided to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Huggingface model hub. Users can either download these weights locally and provide that path to the `model_name` argument in the scripts, or pass the provided alias directly to the `model_name` argument, in which case the scripts will download the weights automatically.
Model names are aliased in the "gupshup_TASK_MODEL" format, where "TASK" can be h2e or e2e and "MODEL" can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
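These checkpoints can also be loaded directly with `transformers`. Below is a minimal sketch for the GPT-2 based e2e model (the example dialogue is hypothetical, and the exact dialogue formatting used during fine-tuning is handled by the evaluation scripts described in the next section, so prefer those for reproducing results):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "midas/gupshup_e2e_gpt"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical English dialogue; real inputs come from the .source files.
dialogue = "Anna: Are we still meeting at 5? Ben: Yes, see you at the cafe."
inputs = tokenizer(dialogue, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```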
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. run_eval script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate metrics.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files in the `input_path` and `reference_path` arguments. Or you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mbart model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
mlkorra/OGBV-gender-bert-hi-en | b494c489a82b7c0f9f44804d3d7398b1d3b33e32 | 2021-09-07T15:13:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | mlkorra | null | mlkorra/OGBV-gender-bert-hi-en | 12 | null | transformers | 10,622 | ## BERT Model for OGBV gendered text classification
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("mlkorra/OGBV-gender-bert-hi-en")
model = AutoModelForSequenceClassification.from_pretrained("mlkorra/OGBV-gender-bert-hi-en")
```
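A minimal inference sketch with this model (the example sentence is hypothetical; check `model.config.id2label` for the label names of this checkpoint):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mlkorra/OGBV-gender-bert-hi-en")
model = AutoModelForSequenceClassification.from_pretrained("mlkorra/OGBV-gender-bert-hi-en")

text = "tum bahut bura bolte ho"  # hypothetical code-mixed Hindi-English example
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], probs.tolist())
```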
## Model Performance
|Metric|dev|test|
|---|--|--|
|Accuracy|0.88|0.81|
|F1(weighted)|0.86|0.80|
|
mobedkova/wav2vec2-large-xls-r-300m-ru-test | 042ee97adccd20b0b161130bb3edcba574e9abbb | 2022-03-23T18:27:44.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ru",
"dataset:common_voice",
"transformers",
"hf-asr-leaderboard",
"robust-speech-event",
"model-index"
]
| automatic-speech-recognition | false | mobedkova | null | mobedkova/wav2vec2-large-xls-r-300m-ru-test | 12 | null | transformers | 10,623 | ---
language:
- ru
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: Russian Wav2Vec2 XLS-R 300m
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-7.0
type: mozilla-foundation/common_voice_7_0
args: ru
metrics:
- name: Test WER
type: wer
value: 27.81
- name: Test CER
type: cer
value: 8.83
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ru
metrics:
- name: Test WER
type: wer
value: 44.64
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ru
metrics:
- name: Test WER
type: wer
value: 42.51
---
# Russian Speech Recognition model |
mrm8488/AfricanBERTa | d8817ee58e1a854a2b33604b229fb18356e49b2c | 2021-05-20T18:00:12.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | mrm8488 | null | mrm8488/AfricanBERTa | 12 | null | transformers | 10,624 | Entry not found |
mrm8488/RuPERTa-base-finetuned-ner | c33c7f9b31937060377e5fd630e50dce23cd1b3c | 2021-05-20T18:06:10.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"es",
"transformers",
"autotrain_compatible"
]
| token-classification | false | mrm8488 | null | mrm8488/RuPERTa-base-finetuned-ner | 12 | 1 | transformers | 10,625 | ---
language: es
thumbnail:
---
# RuPERTa-base (Spanish RoBERTa) + NER 🎃🏷
This model is a version of [RuPERTa-base](https://huggingface.co/mrm8488/RuPERTa-base) fine-tuned on [NER-C](https://www.kaggle.com/nltkdata/conll-corpora) for the **NER** downstream task.
## Details of the downstream task (NER) - Dataset
- [Dataset: CONLL Corpora ES](https://www.kaggle.com/nltkdata/conll-corpora) 📚
| Dataset | # Examples |
| ---------------------- | ----- |
| Train | 329 K |
| Dev | 40 K |
- [Fine-tune on NER script provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py)
- Labels covered:
```
B-LOC
B-MISC
B-ORG
B-PER
I-LOC
I-MISC
I-ORG
I-PER
O
```
## Metrics on evaluation set 🧾
| Metric | # score |
| :------------------------------------------------------------------------------------: | :-------: |
| F1 | **77.55**
| Precision | **75.53** |
| Recall | **79.68** |
## Model in action 🔨
Example of usage:
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Load the fine-tuned NER model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained('mrm8488/RuPERTa-base-finetuned-ner')
model = AutoModelForTokenClassification.from_pretrained('mrm8488/RuPERTa-base-finetuned-ner')
id2label = {
"0": "B-LOC",
"1": "B-MISC",
"2": "B-ORG",
"3": "B-PER",
"4": "I-LOC",
"5": "I-MISC",
"6": "I-ORG",
"7": "I-PER",
"8": "O"
}
text ="Julien, CEO de HF, nació en Francia."
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
last_hidden_states = outputs[0]
for m in last_hidden_states:
for index, n in enumerate(m):
if(index > 0 and index <= len(text.split(" "))):
print(text.split(" ")[index-1] + ": " + id2label[str(torch.argmax(n).item())])
'''
Output:
--------
Julien,: I-PER
CEO: O
de: O
HF,: B-ORG
nació: I-PER
en: I-PER
Francia.: I-LOC
'''
```
Yeah! Not too bad 🎉
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/distilgpt2-finedtuned-meditations | 61c307b75f644636aa761587461f3eda8ba643be | 2021-05-23T10:20:32.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | mrm8488 | null | mrm8488/distilgpt2-finedtuned-meditations | 12 | 1 | transformers | 10,626 | Entry not found |
mrm8488/funnel-transformer-intermediate-mnli | 0d61e100a125b14a793f332085594790fdff1b51 | 2020-11-09T00:09:39.000Z | [
"pytorch",
"funnel",
"text-classification",
"transformers"
]
| text-classification | false | mrm8488 | null | mrm8488/funnel-transformer-intermediate-mnli | 12 | null | transformers | 10,627 | Entry not found |
mrm8488/t5-base-finetuned-tab_fact | f3ccb2da496d7757953e8f68cdb20f5cfab672ae | 2021-06-23T13:04:31.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-tab_fact | 12 | null | transformers | 10,628 | Entry not found |
napsternxg/scibert_scivocab_cased_SDU21_AI | 9a1bcabf4e9905d0633a5c3c72aba58188b5c364 | 2021-05-20T01:08:08.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | napsternxg | null | napsternxg/scibert_scivocab_cased_SDU21_AI | 12 | null | transformers | 10,629 | scibert_scivocab_cased submission for SDU21 Task 1 AI
|
napsternxg/scibert_scivocab_uncased_ft_SDU21_AI | 2cc94528633a521bf71a3d64794941fdd9ce54a3 | 2021-05-20T01:09:59.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | napsternxg | null | napsternxg/scibert_scivocab_uncased_ft_SDU21_AI | 12 | null | transformers | 10,630 | scibert_scivocab_uncased_ft MLM pretrained on SDU21 Task 1 + 2
|
ncoop57/code-clippy-125M-py | 8b49d56310bcbbfb6c6d02c28e2becba641d5a20 | 2021-12-29T13:11:41.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | ncoop57 | null | ncoop57/code-clippy-125M-py | 12 | null | transformers | 10,631 | Entry not found |
neuralspace-reverie/indic-transformers-bn-bert | 571ae80ab32841d55a114ab44708c4e9eb3fe3fc | 2021-05-20T01:33:26.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"bn",
"transformers",
"MaskedLM",
"Bengali",
"autotrain_compatible"
]
| fill-mask | false | neuralspace-reverie | null | neuralspace-reverie/indic-transformers-bn-bert | 12 | null | transformers | 10,632 | ---
language:
- bn
tags:
- MaskedLM
- Bengali
---
# Indic-Transformers Bengali BERT
## Model description
This is a BERT language model pre-trained on a ~3 GB monolingual training corpus. The pre-training data was mostly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-bn-bert')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-bn-bert')
text = "আপনি কেমন আছেন?"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 6, 768]
```
#### Limitations and bias
The original language model has been trained using `PyTorch` and hence the use of `pytorch_model.bin` weights file is recommended. The h5 file for `Tensorflow` has been generated manually by commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-bn-xlmroberta | 2a97580fb72a18525d8d071dcc9a3bb348f196cf | 2020-12-11T21:57:15.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"bn",
"transformers",
"MaskedLM",
"Bengali",
"XLMRoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"autotrain_compatible"
]
| fill-mask | false | neuralspace-reverie | null | neuralspace-reverie/indic-transformers-bn-xlmroberta | 12 | null | transformers | 10,633 | ---
language:
- bn
tags:
- MaskedLM
- Bengali
- XLMRoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Bengali XLMRoBERTa
## Model description
This is an XLMRoBERTa language model pre-trained on a ~3 GB monolingual training corpus. The pre-training data was mostly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-bn-xlmroberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-bn-xlmroberta')
text = "আপনি কেমন আছেন?"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model has been trained using `PyTorch` and hence the use of `pytorch_model.bin` weights file is recommended. The h5 file for `Tensorflow` has been generated manually by commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
openclimatefix/metnet-2 | bf3ff79ede5c30bf69aad7e51b4be03eb9bb7798 | 2022-02-02T13:26:42.000Z | [
"pytorch",
"transformers"
]
| null | false | openclimatefix | null | openclimatefix/metnet-2 | 12 | null | transformers | 10,634 | Entry not found |
openclimatefix/metnet | bd97bbd638cad466f9d58739c1a7381270a6fd28 | 2022-02-02T13:26:32.000Z | [
"pytorch",
"transformers"
]
| null | false | openclimatefix | null | openclimatefix/metnet | 12 | 1 | transformers | 10,635 | Entry not found |
pablouribe/xls-r-spanish-test | a3da82ef93f7e26dc4fcd27585a24de330f39f9c | 2022-03-23T18:27:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | pablouribe | null | pablouribe/xls-r-spanish-test | 12 | null | transformers | 10,636 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: xls-r-spanish-test
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: es
metrics:
- name: Test WER
type: wer
value: 13.89
- name: Test CER
type: cer
value: 3.85
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: es
metrics:
- name: Test WER
type: wer
value: 37.66
- name: Test CER
type: cer
value: 15.32
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: es
metrics:
- name: Test WER
type: wer
value: 41.17
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - ES dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1461
- Wer: 1.0063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.953 | 0.15 | 1000 | 2.9528 | 1.0 |
| 1.1519 | 0.3 | 2000 | 0.3735 | 1.0357 |
| 1.0278 | 0.45 | 3000 | 0.2529 | 1.0390 |
| 0.9922 | 0.61 | 4000 | 0.2208 | 1.0270 |
| 0.9618 | 0.76 | 5000 | 0.2088 | 1.0294 |
| 0.9364 | 0.91 | 6000 | 0.2019 | 1.0214 |
| 0.9179 | 1.06 | 7000 | 0.1940 | 1.0294 |
| 0.9154 | 1.21 | 8000 | 0.1915 | 1.0290 |
| 0.8985 | 1.36 | 9000 | 0.1837 | 1.0211 |
| 0.9055 | 1.51 | 10000 | 0.1838 | 1.0273 |
| 0.8861 | 1.67 | 11000 | 0.1765 | 1.0139 |
| 0.892 | 1.82 | 12000 | 0.1723 | 1.0188 |
| 0.8778 | 1.97 | 13000 | 0.1735 | 1.0092 |
| 0.8645 | 2.12 | 14000 | 0.1707 | 1.0106 |
| 0.8595 | 2.27 | 15000 | 0.1713 | 1.0186 |
| 0.8392 | 2.42 | 16000 | 0.1686 | 1.0053 |
| 0.8436 | 2.57 | 17000 | 0.1653 | 1.0096 |
| 0.8405 | 2.73 | 18000 | 0.1689 | 1.0077 |
| 0.8382 | 2.88 | 19000 | 0.1645 | 1.0114 |
| 0.8247 | 3.03 | 20000 | 0.1647 | 1.0078 |
| 0.8219 | 3.18 | 21000 | 0.1611 | 1.0026 |
| 0.8024 | 3.33 | 22000 | 0.1580 | 1.0062 |
| 0.8087 | 3.48 | 23000 | 0.1578 | 1.0038 |
| 0.8097 | 3.63 | 24000 | 0.1556 | 1.0057 |
| 0.8094 | 3.79 | 25000 | 0.1552 | 1.0035 |
| 0.7836 | 3.94 | 26000 | 0.1516 | 1.0052 |
| 0.8042 | 4.09 | 27000 | 0.1515 | 1.0054 |
| 0.7925 | 4.24 | 28000 | 0.1499 | 1.0031 |
| 0.7855 | 4.39 | 29000 | 0.1490 | 1.0041 |
| 0.7814 | 4.54 | 30000 | 0.1482 | 1.0068 |
| 0.7859 | 4.69 | 31000 | 0.1460 | 1.0066 |
| 0.7819 | 4.85 | 32000 | 0.1464 | 1.0062 |
| 0.7784 | 5.0 | 33000 | 0.1460 | 1.0063 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
para-zhou/cunlp-gpt2-dialog | 8e7cce7792a2198a08de9c06a6aa661cf6a68f6e | 2021-05-23T10:56:01.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | para-zhou | null | para-zhou/cunlp-gpt2-dialog | 12 | null | transformers | 10,637 | Entry not found |
patrickvonplaten/wav2vec2-100m-mls-german-ft-2 | e73289c8ed3b69de81554d4497ece7a715a760e9 | 2021-11-16T00:01:09.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:multilingual_librispeech",
"transformers",
"multilingual_librispeech",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-100m-mls-german-ft-2 | 12 | null | transformers | 10,638 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- multilingual_librispeech
- generated_from_trainer
datasets:
- multilingual_librispeech
model-index:
- name: wav2vec2-100m-mls-german-ft-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-100m-mls-german-ft-2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-100m](https://huggingface.co/facebook/wav2vec2-xls-r-100m) on the MULTILINGUAL_LIBRISPEECH - GERMAN dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9304
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.9545 | 14.29 | 500 | 2.9354 | 1.0 |
| 2.9537 | 28.57 | 1000 | 2.9359 | 1.0 |
| 2.9602 | 42.86 | 1500 | 2.9302 | 1.0 |
| 2.9586 | 57.14 | 2000 | 2.9298 | 1.0 |
| 2.9331 | 71.43 | 2500 | 2.9314 | 1.0 |
| 2.9321 | 85.71 | 3000 | 2.9304 | 1.0 |
| 2.9652 | 100.0 | 3500 | 2.9304 | 1.0 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-base-100h-2nd-try | 7f9ffca91cd9d03f84843abe410844e375448646 | 2021-11-04T15:41:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"transformers",
"audio",
"license:apache-2.0"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-base-100h-2nd-try | 12 | null | transformers | 10,639 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
widget:
- example_title: IEMOCAP sample 1
src: https://cdn-media.huggingface.co/speech_samples/IEMOCAP_Ses01F_impro03_F013.wav
- example_title: IEMOCAP sample 2
src: https://cdn-media.huggingface.co/speech_samples/IEMOCAP_Ses01F_impro04_F000.wav
- example_title: LibriSpeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac
- example_title: LibriSpeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0001.flac
- example_title: VoxCeleb sample 1
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- example_title: VoxCeleb sample 2
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
---
Second fine-tuning try of `wav2vec2-base`. Results are similar to the ones reported in https://huggingface.co/facebook/wav2vec2-base-100h.
Model was trained on *librispeech-clean-train.100* with following hyper-parameters:
- 2 GPUs Titan RTX
- Total update steps 11000
- Batch size per GPU: 32, corresponding to a *total batch size* of ca. 750 seconds of audio
- Adam with linear decaying learning rate with 3000 warmup steps
- dynamic padding for batch
- fp16
- attention_mask was **not** used during training
Check: https://wandb.ai/patrickvonplaten/huggingface/runs/1yrpescx?workspace=user-patrickvonplaten
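A minimal transcription sketch with this checkpoint (assumes 16 kHz mono audio and that the usual Wav2Vec2 processor files are shipped with the checkpoint; this is not the exact evaluation script used for the numbers below):
```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "patrickvonplaten/wav2vec2-base-100h-2nd-try"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Tiny LibriSpeech sample commonly used in transformers examples, already at 16 kHz.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```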
*Result (WER)* on Librispeech:
| "clean" (% rel difference to results in paper) | "other" (% rel difference to results in paper) |
|---|---|
| 6.2 (-1.6%) | 15.2 (-11.2%)| |
patrickvonplaten/wavlm-libri-clean-100h-large | e70e3a062ec399c46008ee55d1fb52c7ba338d5c | 2021-12-17T13:40:58.000Z | [
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"transformers",
"librispeech_asr",
"generated_from_trainer",
"wavlm_libri_finetune",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wavlm-libri-clean-100h-large | 12 | 1 | transformers | 10,640 | ---
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
- wavlm_libri_finetune
model-index:
- name: wavlm-librispeech-clean-100h-dist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-libri-clean-100h-large
This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0601
- Wer: 0.0491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8069 | 0.34 | 300 | 0.7510 | 0.5809 |
| 0.2483 | 0.67 | 600 | 0.2023 | 0.1929 |
| 0.1033 | 1.01 | 900 | 0.1123 | 0.1028 |
| 0.0742 | 1.35 | 1200 | 0.0858 | 0.0771 |
| 0.057 | 1.68 | 1500 | 0.0722 | 0.0663 |
| 0.0421 | 2.02 | 1800 | 0.0682 | 0.0582 |
| 0.0839 | 2.35 | 2100 | 0.0630 | 0.0534 |
| 0.0307 | 2.69 | 2400 | 0.0603 | 0.0508 |
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
persiannlp/mbert-base-parsinlu-entailment | de5fd7fbf87a6f9e157ec1247fa234133f496824 | 2021-09-23T16:19:47.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"entailment",
"parsbert",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0"
]
| text-classification | false | persiannlp | null | persiannlp/mbert-base-parsinlu-entailment | 12 | null | transformers | 10,641 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- parsbert
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
labels = ["entails", "contradicts", "neutral"]
model_name_or_path = "persiannlp/mbert-base-parsinlu-entailment"
model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,)
def model_predict(text_a, text_b):
features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt')
output = model(**features)
logits = output[0]
probs = torch.nn.functional.softmax(logits, dim=1).tolist()
idx = np.argmax(np.array(probs))
print(labels[idx], probs)
model_predict(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
model_predict(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
model_predict(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
philschmid/RoBERTa-Banking77 | e45f9df5bcd9e61ee4ffe582d9c0aa3ec1644d60 | 2021-11-04T09:12:24.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:banking77",
"transformers",
"autonlp",
"model-index"
]
| text-classification | false | philschmid | null | philschmid/RoBERTa-Banking77 | 12 | null | transformers | 10,642 | ---
tags: autonlp
language: en
widget:
- text: "I am still waiting on my card?"
datasets:
- banking77
model-index:
- name: RoBERTa-Banking77
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: "BANKING77"
type: banking77
metrics:
- name: Accuracy
type: accuracy
value: 93.51
- name: Macro F1
type: macro-f1
value: 93.49
- name: Weighted F1
type: weighted-f1
value: 93.49
---
# `RoBERTa-Banking77` trained using autoNLP
- Problem type: Multi-class Classification
## Validation Metrics
- Loss: 0.27382662892341614
- Accuracy: 0.935064935064935
- Macro F1: 0.934939412967268
- Micro F1: 0.935064935064935
- Weighted F1: 0.934939412967268
- Macro Precision: 0.9372295644352715
- Micro Precision: 0.935064935064935
- Weighted Precision: 0.9372295644352717
- Macro Recall: 0.9350649350649349
- Micro Recall: 0.935064935064935
- Weighted Recall: 0.935064935064935
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/philschmid/RoBERTa-Banking77
```
Or Python API:
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_id = 'philschmid/RoBERTa-Banking77'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
classifier = pipeline('text-classification', tokenizer=tokenizer, model=model)
classifier('What is the base of the exchange rates?')
``` |
pkushiqiang/bert-degree-major-ner-1000 | f0b5306bd4c4304a9142fff08314ac6255066380 | 2022-02-28T08:05:25.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | pkushiqiang | null | pkushiqiang/bert-degree-major-ner-1000 | 12 | null | transformers | 10,643 | Entry not found |
proycon/bert-ner-cased-sonar1-nld | d3343525caf1d15d2adc7a8e9fb56345fc145019 | 2021-05-20T03:06:13.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | proycon | null | proycon/bert-ner-cased-sonar1-nld | 12 | null | transformers | 10,644 | Entry not found |
remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization | 5a985e99440eed91e9227f5393257ab43a4712d8 | 2021-05-20T04:14:02.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | remi | null | remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization | 12 | null | transformers | 10,645 | Entry not found |
scaperex/online-harassment-bert2 | dcb1fbef60973be645c4b0e8ba8a560561b2d491 | 2021-07-14T15:48:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | scaperex | null | scaperex/online-harassment-bert2 | 12 | null | transformers | 10,646 | Entry not found |
seiya/oubiobert-base-uncased | 694c027b394acd2390e7cbcc4e3242e7c893ab72 | 2021-05-20T05:10:40.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"arxiv:2005.07202",
"transformers",
"exbert",
"license:apache-2.0"
]
| null | false | seiya | null | seiya/oubiobert-base-uncased | 12 | 1 | transformers | 10,647 | ---
tags:
- exbert
license: apache-2.0
---
# ouBioBERT-Base, Uncased
Bidirectional Encoder Representations from Transformers for Biomedical Text Mining by Osaka University (ouBioBERT) is a language model based on the BERT-Base (Devlin, et al., 2019) architecture. We pre-trained ouBioBERT on PubMed abstracts from the PubMed baseline (ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline) via our method.
The details of the pre-training procedure can be found in Wada, et al. (2020).
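A minimal feature-extraction sketch with this checkpoint (it follows the standard uncased BERT interface; fine-tuning code for the BLUE tasks is linked further below):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("seiya/oubiobert-base-uncased")
model = AutoModel.from_pretrained("seiya/oubiobert-base-uncased")

sentence = "Coronavirus disease (COVID-19) is caused by SARS-COV2."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # [CLS] representation, shape (1, hidden_size)
print(cls_embedding.shape)
```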
## Evaluation
We evaluated the performance of ouBioBERT on the Biomedical Language Understanding Evaluation (BLUE) benchmark (Peng, et al., 2019). The numbers are the mean (standard deviation) over five different random seeds.
| Dataset | Task Type | Score |
|:----------------|:-----------------------------|-------------:|
| MedSTS | Sentence similarity | 84.9 (0.6) |
| BIOSSES | Sentence similarity | 92.3 (0.8) |
| BC5CDR-disease | Named-entity recognition | 87.4 (0.1) |
| BC5CDR-chemical | Named-entity recognition | 93.7 (0.2) |
| ShARe/CLEFE | Named-entity recognition | 80.1 (0.4) |
| DDI | Relation extraction | 81.1 (1.5) |
| ChemProt | Relation extraction | 75.0 (0.3) |
| i2b2 2010 | Relation extraction | 74.0 (0.8) |
| HoC | Document classification | 86.4 (0.5) |
| MedNLI | Inference | 83.6 (0.7) |
| **Total** | Macro average of the scores |**83.8 (0.3)**|
## Code for Fine-tuning
We made the source code for fine-tuning freely available at [our repository](https://github.com/sy-wada/blue_benchmark_with_transformers).
## Citation
If you use our work in your research, please kindly cite the following paper:
```bibtex
@misc{2005.07202,
Author = {Shoya Wada and Toshihiro Takeda and Shiro Manabe and Shozo Konishi and Jun Kamohara and Yasushi Matsumura},
Title = {A pre-training technique to localize medical BERT and enhance BioBERT},
Year = {2020},
Eprint = {arXiv:2005.07202},
}
```
<a href="https://huggingface.co/exbert/?model=seiya/oubiobert-base-uncased&sentence=Coronavirus%20disease%20(COVID-19)%20is%20caused%20by%20SARS-COV2%20and%20represents%20the%20causative%20agent%20of%20a%20potentially%20fatal%20disease%20that%20is%20of%20great%20global%20public%20health%20concern.">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
sello-ralethe/roberta-base-generics-mlm | 709e5ec7f584c9129240352667c85e723d8815f5 | 2021-05-20T20:10:26.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | sello-ralethe | null | sello-ralethe/roberta-base-generics-mlm | 12 | null | transformers | 10,648 | Entry not found |
sentence-transformers/nli-distilbert-base-max-pooling | 9ce8088f2aa3325e07ef0f13ac79e2887213857a | 2022-06-16T00:49:26.000Z | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| sentence-similarity | false | sentence-transformers | null | sentence-transformers/nli-distilbert-base-max-pooling | 12 | null | sentence-transformers | 10,649 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/nli-distilbert-base-max-pooling
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/nli-distilbert-base-max-pooling')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Max Pooling - Take the max value over time for every dimension.
def max_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value
return torch.max(token_embeddings, 1)[0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-distilbert-base-max-pooling')
model = AutoModel.from_pretrained('sentence-transformers/nli-distilbert-base-max-pooling')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-distilbert-base-max-pooling)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
shubh2014shiv/jp_review_sentiments_amzn | 63c259ce5070cf73ecff79c1d3808096bf56dd45 | 2021-11-06T14:18:29.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | shubh2014shiv | null | shubh2014shiv/jp_review_sentiments_amzn | 12 | null | transformers | 10,650 | # Steps to use this model
This model uses the tokenizer 'rinna/japanese-roberta-base'. Therefore, the steps below are critical to running the model correctly.
1. Create a local root directory on your system and a new Python environment.
2. Install below requirements
```
transformers==4.12.2
torch==1.10.0
numpy==1.21.3
pandas==1.3.4
sentencepiece==0.1.96
```
3. Go to the link "https://huggingface.co/spaces/shubh2014shiv/Japanese_NLP/tree/main" and download the fine-tuned weights "reviewSentiments_jp.pt" into the same local root directory.
4. Rename the downloaded weights as "reviewSentiments_jp.pt"
5. Use below code in the newly created environment.
```
from transformers import T5Tokenizer,BertForSequenceClassification
import torch
import numpy as np
tokenizer = T5Tokenizer.from_pretrained('rinna/japanese-roberta-base')
japanese_review_text = "履きやすい。タイムセールで購入しました。見た目以上にカッコいいです。(^^)"
encoded_data = tokenizer.batch_encode_plus([japanese_review_text ],
add_special_tokens=True,
return_attention_mask=True,
padding=True,
max_length=200,
return_tensors='pt',
truncation=True)
input_ids = encoded_data['input_ids']
attention_masks = encoded_data['attention_mask']
model = BertForSequenceClassification.from_pretrained("shubh2014shiv/jp_review_sentiments_amzn",
num_labels=2,
output_attentions=False,
output_hidden_states=False)
model.load_state_dict(torch.load('reviewSentiments_jp.pt',map_location=torch.device('cpu')))
inputs = { 'input_ids': input_ids,
'attention_mask': attention_masks}
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
logits = logits.detach().cpu().numpy()
scores = 1 / (1 + np.exp(-1 * logits))
result = {"TEXT (文章)": jp_review_text,'NEGATIVE (ネガティブ)': scores[0][0], 'POSITIVE (ポジティブ)': scores[0][1]}
```
Output:
{'TEXT (文章)': '履きやすい。タイムセールで購入しました。見た目以上にカッコいいです。(^^)', 'NEGATIVE (ネガティブ)': 0.023672901, 'POSITIVE (ポジティブ)': 0.96819043} |
slider/simcse-chinese-roberta-wwm-ext | 987d39fd06fafa8bfc3b2dc809c142e81a038f74 | 2021-12-10T03:26:18.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | slider | null | slider/simcse-chinese-roberta-wwm-ext | 12 | 1 | transformers | 10,651 | Entry not found |
socialmediaie/TRAC2020_IBEN_B_bert-base-multilingual-uncased | c99643eba2430b5ed81cc05f49f059995552fa8f | 2021-05-20T07:04:58.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_IBEN_B_bert-base-multilingual-uncased | 12 | null | transformers | 10,652 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models were retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd
TASK_LABEL_IDS = {
"Sub-task A": ["OAG", "NAG", "CAG"],
"Sub-task B": ["GEN", "NGEN"],
"Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}
model_version="databank" # other option is hugging face library
if model_version == "databank":
# Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
# Unzip the file at some model_path (we are using: "databank_model")
model_path = next(Path("databank_model").glob("./*/output/*/model"))
# Assuming you get the following type of structure inside "databank_model"
# 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
lang, task, _, base_model, _ = model_path.parts
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model)
# For doing inference set model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()
task_labels = TASK_LABEL_IDS[task]
sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(sentence)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])
with torch.no_grad():
logits, = model(tokens_tensor, labels=None)
logits
preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
'CAG-NGEN': 0.03244293,
'NAG-GEN': 0.6897794,
'NAG-NGEN': 0.15498641,
'OAG-GEN': 0.034373745,
'OAG-NGEN': 0.020792078},
array(['NAG-GEN'], dtype='<U8'))
"""
``` |
soikit/chinese-bert-wwm-chinese_bert_wwm2 | 7c70bff0892479e336ad12714d0144f0a523d049 | 2021-10-20T16:49:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers"
]
| text-generation | false | soikit | null | soikit/chinese-bert-wwm-chinese_bert_wwm2 | 12 | null | transformers | 10,653 | Entry not found |
sosuke/ease-roberta-base | 28eb51f87096ed7e9c38b274c10ab77d656cf2c9 | 2021-12-29T08:04:13.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | sosuke | null | sosuke/ease-roberta-base | 12 | null | transformers | 10,654 | Entry not found |
spencerh/centerpartisan | 2c37b7a79b45517d0ac3c24cb324bcf3ca910c1d | 2021-04-23T20:44:08.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | spencerh | null | spencerh/centerpartisan | 12 | null | transformers | 10,655 | Entry not found |
sshleifer/student_pegasus_xsum_16_4 | 031d3bf009727b7e0e488b7353253f9035736df1 | 2020-08-27T21:24:12.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sshleifer | null | sshleifer/student_pegasus_xsum_16_4 | 12 | null | transformers | 10,656 | Entry not found |
sshleifer/t5-base-cnn | d23d8b32609b5ddcabc3a8288b7440dee0de479a | 2021-06-23T14:25:31.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sshleifer | null | sshleifer/t5-base-cnn | 12 | null | transformers | 10,657 | Entry not found |
suwani/BERT_NER_Ep5-finetuned-ner | 1406ac38bcf29398efebe9368feb4aaff6f41ba8 | 2021-10-11T03:06:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | suwani | null | suwani/BERT_NER_Ep5-finetuned-ner | 12 | null | transformers | 10,658 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERT_NER_Ep5-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_NER_Ep5-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3553
- Precision: 0.6526
- Recall: 0.7248
- F1: 0.6868
- Accuracy: 0.9004
## Model description
More information needed
## Intended uses & limitations
More information needed
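Although the card leaves usage undocumented, the checkpoint is a standard token-classification model and can be loaded with the `pipeline` API. The sketch below is only illustrative: the example sentence is made up, and the entity label names depend on the (unspecified) training data, so inspect `model.config.id2label` before relying on them.
```python
from transformers import pipeline

# Minimal sketch: load the checkpoint as a token-classification pipeline.
# The meaning of the predicted labels depends on the undocumented training data.
ner = pipeline(
    "token-classification",
    model="suwani/BERT_NER_Ep5-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Satya Nadella is the CEO of Microsoft, headquartered in Redmond."))
```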
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 288 | 0.3675 | 0.5906 | 0.5854 | 0.5880 | 0.8802 |
| 0.4803 | 2.0 | 576 | 0.3456 | 0.5863 | 0.7371 | 0.6531 | 0.8864 |
| 0.4803 | 3.0 | 864 | 0.3273 | 0.6478 | 0.7091 | 0.6771 | 0.8987 |
| 0.2233 | 4.0 | 1152 | 0.3441 | 0.6539 | 0.7226 | 0.6865 | 0.9001 |
| 0.2233 | 5.0 | 1440 | 0.3553 | 0.6526 | 0.7248 | 0.6868 | 0.9004 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
suwani/BERT_NER_Ep5_PAD_50-finetuned-ner | 06a9cc9b04c3c34a8f5930363a9623e85abc29f5 | 2021-10-27T13:13:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | suwani | null | suwani/BERT_NER_Ep5_PAD_50-finetuned-ner | 12 | null | transformers | 10,659 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERT_NER_Ep5_PAD_50-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_NER_Ep5_PAD_50-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3893
- Precision: 0.6540
- Recall: 0.7348
- F1: 0.6920
- Accuracy: 0.9006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 288 | 0.3705 | 0.5852 | 0.6215 | 0.6028 | 0.8793 |
| 0.4885 | 2.0 | 576 | 0.3351 | 0.5925 | 0.7317 | 0.6548 | 0.8865 |
| 0.4885 | 3.0 | 864 | 0.3196 | 0.6471 | 0.7138 | 0.6788 | 0.8994 |
| 0.2172 | 4.0 | 1152 | 0.3368 | 0.6454 | 0.7323 | 0.6861 | 0.8992 |
| 0.2172 | 5.0 | 1440 | 0.3491 | 0.6507 | 0.7312 | 0.6886 | 0.9008 |
| 0.1459 | 6.0 | 1728 | 0.3833 | 0.6715 | 0.7018 | 0.6863 | 0.9013 |
| 0.1045 | 7.0 | 2016 | 0.3893 | 0.6540 | 0.7348 | 0.6920 | 0.9006 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
suwani/BERT_NER_Ep6_PAD_50-finetuned-ner | 262d5e853661ab7da350c61b50e06c0442d23da7 | 2021-10-27T10:28:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | suwani | null | suwani/BERT_NER_Ep6_PAD_50-finetuned-ner | 12 | null | transformers | 10,660 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERT_NER_Ep6_PAD_50-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_NER_Ep6_PAD_50-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3741
- Precision: 0.6510
- Recall: 0.7399
- F1: 0.6926
- Accuracy: 0.9020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 288 | 0.3648 | 0.5949 | 0.5907 | 0.5928 | 0.8792 |
| 0.4815 | 2.0 | 576 | 0.3400 | 0.5860 | 0.7390 | 0.6536 | 0.8867 |
| 0.4815 | 3.0 | 864 | 0.3217 | 0.6404 | 0.7129 | 0.6747 | 0.8992 |
| 0.2206 | 4.0 | 1152 | 0.3430 | 0.6413 | 0.7321 | 0.6837 | 0.8995 |
| 0.2206 | 5.0 | 1440 | 0.3560 | 0.6464 | 0.7377 | 0.6890 | 0.9010 |
| 0.1487 | 6.0 | 1728 | 0.3741 | 0.6510 | 0.7399 | 0.6926 | 0.9020 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
team-writing-assistant/t5-base-c4jfleg | 2a7832d6236f8f9fc7889f6276c90c5fa7131559 | 2021-11-19T11:57:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | team-writing-assistant | null | team-writing-assistant/t5-base-c4jfleg | 12 | 2 | transformers | 10,661 | # Model Description:
The t5-base-c4jfleg model was created by fine-tuning T5-base on the [**JFLEG dataset**](https://huggingface.co/datasets/jfleg) and the [**C4 200M dataset**](https://huggingface.co/datasets/liweili/c4_200m), taking around 3000 examples from each, with the objective of grammar correction.
Google's original **T5-base** model was pre-trained on the [**C4 dataset**](https://huggingface.co/datasets/c4).
The T5 model was presented in [**Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer**](https://arxiv.org/pdf/1910.10683.pdf) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
# Prefix:
The T5 model uses "grammar: " as the input text prefix for grammatical corrections.
## Usage :
```
from transformers import pipeline
checkpoint = "team-writing-assistant/t5-base-c4jfleg"
model = pipeline("text2text-generation", model=checkpoint)
text = "Speed of light is fastest then speed of sound"
text = "grammar: " + text
output = model(text)
print("Result: ", output[0]['generated_text'])
```
```
Result: Speed of light is faster than speed of sound.
```
## Other Examples :
Input: My grammar are bad.
Output: My grammar is bad.
Input: Who are the president?
Output: Who is the president? |
tesemnikov-av/rubert-ner-toxicity | c21271fd92a1f99b50c8d62a9b28585546169993 | 2022-02-08T12:52:32.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | tesemnikov-av | null | tesemnikov-av/rubert-ner-toxicity | 12 | null | transformers | 10,662 | ---
widget:
- text: "Ну ты и придурок!!"
---
NER toxicity models
Fine-tuned from [cointegrated/rubert-tiny-toxicity](https://huggingface.co/cointegrated/rubert-tiny-toxicity) on data from [toxic_dataset_ner](https://huggingface.co/datasets/tesemnikov-av/toxic_dataset_ner).
Language: Russian (RU)
```python
!pip install transformers > /dev/null
from transformers import (
AutoModelForTokenClassification,
AutoTokenizer,
pipeline
)
model = AutoModelForTokenClassification.from_pretrained('tesemnikov-av/rubert-ner-toxicity')
tokenizer = AutoTokenizer.from_pretrained('tesemnikov-av/rubert-ner-toxicity')
pipe = pipeline(model=model, tokenizer=tokenizer, task='ner', aggregation_strategy='average')
text = "Они охриневшие там все придурки!!"
print(text)
print(pipe(text))
```
|
thomwolf/codeparrot-small | f350f6111154ca2acbcf2851846da96fbc755a2d | 2021-07-27T22:19:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | thomwolf | null | thomwolf/codeparrot-small | 12 | null | transformers | 10,663 | Entry not found |
tugstugi/bert-large-mongolian-uncased | 6583581fdb3cd1daf61c76a0efdc8eb543340427 | 2021-05-20T08:19:28.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"mn",
"arxiv:1810.04805",
"transformers",
"mongolian",
"uncased",
"autotrain_compatible"
]
| fill-mask | false | tugstugi | null | tugstugi/bert-large-mongolian-uncased | 12 | 3 | transformers | 10,664 | ---
language: "mn"
tags:
- bert
- mongolian
- uncased
---
# BERT-LARGE-MONGOLIAN-UNCASED
[Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert)
## Model description
This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu).
Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs.
This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/),
[huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese).
#### How to use
```python
from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-large-mongolian-uncased', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-large-mongolian-uncased')
## declare task ##
pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)
## example ##
input_ = 'Монгол улсын [MASK] Улаанбаатар хотоос ярьж байна.'
output_ = pipe(input_)
for i in range(len(output_)):
print(output_[i])
## output ##
# {'sequence': 'монгол улсын нийслэл улаанбаатар хотоос ярьж байна.', 'score': 0.7867621183395386, 'token': 849, 'token_str': 'нийслэл'}
# {'sequence': 'монгол улсын ерөнхийлөгч улаанбаатар хотоос ярьж байна.', 'score': 0.14303277432918549, 'token': 244, 'token_str': 'ерөнхийлөгч'}
# {'sequence': 'монгол улсын ерөнхийлөгчийг улаанбаатар хотоос ярьж байна.', 'score': 0.011642335914075375, 'token': 8373, 'token_str': 'ерөнхийлөгчийг'}
# {'sequence': 'монгол улсын иргэд улаанбаатар хотоос ярьж байна.', 'score': 0.006592822726815939, 'token': 247, 'token_str': 'иргэд'}
# {'sequence': 'монгол улсын нийслэлийг улаанбаатар хотоос ярьж байна.', 'score': 0.006165097933262587, 'token': 15501, 'token_str': 'нийслэлийг'}
```
## Training data
Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)]
### BibTeX entry and citation info
```bibtex
@misc{mongolian-bert,
author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold},
title = {BERT Pretrained Models on Mongolian Datasets},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}}
}
```
|
xkang/distilbert-base-uncased-finetuned-imdb-whole-word-masking | 872600ba41cc8981670fabb6618bff8790cd1dfc | 2021-12-27T07:35:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | xkang | null | xkang/distilbert-base-uncased-finetuned-imdb-whole-word-masking | 12 | null | transformers | 10,665 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb-whole-word-masking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb-whole-word-masking
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3043
## Model description
More information needed
## Intended uses & limitations
More information needed
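Since the checkpoint is a masked-language model fine-tuned on IMDB, one plausible way to query it is through the fill-mask pipeline. The snippet below is a minimal sketch; the example sentence is illustrative only.
```python
from transformers import pipeline

# Minimal sketch: rank candidate tokens for a masked position.
fill_mask = pipeline(
    "fill-mask",
    model="xkang/distilbert-base-uncased-finetuned-imdb-whole-word-masking",
)
for pred in fill_mask("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```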
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5536 | 1.0 | 157 | 3.3242 |
| 3.4026 | 2.0 | 314 | 3.2848 |
| 3.3708 | 3.0 | 471 | 3.2791 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
yhavinga/mt5-base-mixednews-nl | f05412c44b892bdc837d107904475afac49c71c4 | 2021-03-13T08:19:42.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"dutch",
"dataset:xsum_nl",
"transformers",
"summarization",
"autotrain_compatible"
]
| summarization | false | yhavinga | null | yhavinga/mt5-base-mixednews-nl | 12 | null | transformers | 10,666 | ---
tags:
- summarization
language:
- dutch
datasets:
- xsum_nl
widget:
- text: "Onderzoekers ontdekten dat vier van de vijf kinderen in Engeland die op school lunches hadden gegeten, op school voedsel hadden geprobeerd dat ze thuis niet hadden geprobeerd.De helft van de ondervraagde ouders zei dat hun kinderen hadden gevraagd om voedsel dat ze op school hadden gegeten om thuis te worden gekookt.De enquête, van ongeveer 1.000 ouders, vond dat de meest populaire groenten wortelen, suikermaïs en erwten waren.Aubergine, kikkererwten en spinazie waren een van de minst populaire.Van de ondervraagde ouders, 628 hadden kinderen die lunches op school aten. (% duidt op een deel van de ouders die zeiden dat hun kind elke groente zou eten) England's School Food Trust gaf opdracht tot het onderzoek na een onderzoek door de Mumsnet-website suggereerde dat sommige ouders hun kinderen lunchpakket gaven omdat ze dachten dat ze te kieskeurig waren om iets anders te eten. \"Schoolmaaltijden kunnen een geweldige manier zijn om ouders te helpen hun kinderen aan te moedigen om nieuw voedsel te proberen en om de verscheidenheid van voedsel in hun dieet te verhogen. \"Mumsnet medeoprichter, Carrie Longton, zei: \"Het krijgen van kinderen om gezond te eten is de droom van elke ouder, maar maaltijdtijden thuis kan vaak een slagveld en emotioneel geladen zijn. \"Vanuit Mumsnetters' ervaring lijkt het erop dat eenmaal op school is er een verlangen om in te passen bij iedereen anders en zelfs een aantal positieve peer pressure om op te scheppen over de verscheidenheid van wat voedsel je kunt eten. \"Schoolmaaltijden zijn ook verplaatst op nogal een beetje van toen Mumsnetters op school waren, met gezondere opties en meer afwisseling. \"Schoolmaaltijden in Engeland moeten nu voldoen aan strenge voedingsrichtlijnen.Ongeveer vier op de tien basisschoolkinderen in Engeland eten nu schoollunches, iets meer dan op middelbare scholen.Meer kinderen in Schotland eten schoollunches - ongeveer 46%.Het onderzoek werd online uitgevoerd tussen 26 februari en 5 maart onder een panel van ouders die ten minste één kind op school hadden van 4-17 jaar oud."
- text: "Het Londense trio staat klaar voor de beste Britse act en beste album, evenals voor twee nominaties in de beste song categorie. \"We kregen te horen zoals vanmorgen 'Oh I think you're genomineerd',\" zei Dappy. \"En ik was als 'Oh yeah, what one?' En nu zijn we genomineerd voor vier awards. Ik bedoel, wow! \"Bandmate Fazer voegde eraan toe: \"We dachten dat het het beste van ons was om met iedereen naar beneden te komen en hallo te zeggen tegen de camera's.En nu vinden we dat we vier nominaties hebben. \"De band heeft twee shots bij de beste song prijs, het krijgen van het knikje voor hun Tyncy Stryder samenwerking nummer één, en single Strong Again.Their album Uncle B zal ook gaan tegen platen van Beyonce en Kany \"Aan het eind van de dag zijn we dankbaar om te zijn waar we zijn in onze carrières. \"Als het niet gebeurt dan gebeurt het niet - live om te vechten een andere dag en blijven maken albums en hits voor de fans. \"Dappy onthulde ook dat ze kunnen worden optreden live op de avond.De groep zal doen Nummer Een en ook een mogelijke uitlevering van de War Child single, I Got Soul.Het liefdadigheidslied is een re-working van The Killers' All These Things That I've Done en is ingesteld op artiesten als Chipmunk, Ironik en Pixie Lott.Dit jaar zal Mobos worden gehouden buiten Londen voor de eerste keer, in Glasgow op 30 september.N-Dubz zei dat ze op zoek waren naar optredens voor hun Schotse fans en bogen over hun recente shows ten noorden van de Londense We hebben Aberdeen ongeveer drie of vier maanden geleden gedaan - we hebben die show daar verbrijzeld! Overal waar we heen gaan slaan we hem in elkaar!\""
---
# mt5-base-mixednews-nl
mt5-base finetuned on three mixed news sources:
1. CNN DM translated to Dutch with MarianMT.
2. XSUM translated to Dutch with MarianMT.
3. News article summaries distilled from the nu.nl website.
Config:
* Learning rate 1e-3
* Trained for one epoch
* Max source length 1024
* Max target length 142
* Min target length 75
Scores:
* rouge1 28.8482
* rouge2 9.4584
* rougeL 20.1697
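The card does not include a usage example; the following minimal sketch shows how the model could be called for Dutch summarization. The input text is a placeholder, and the generation settings simply mirror the max/min target lengths listed in the config above.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "yhavinga/mt5-base-mixednews-nl"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "Hier staat het Nederlandstalige nieuwsartikel dat samengevat moet worden ..."
inputs = tokenizer(article, max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(
    **inputs,
    max_length=142,  # mirrors the max target length above
    min_length=75,   # mirrors the min target length above
    num_beams=4,
    no_repeat_ngram_size=3,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```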
|
yigitbekir/turkish-bert-uncased-sentiment | 39c2ac210059db0249fa3fd7893bffad9f577a76 | 2021-05-20T09:29:34.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | yigitbekir | null | yigitbekir/turkish-bert-uncased-sentiment | 12 | null | transformers | 10,667 | Entry not found |
yongzx/gpt2-finetuned-oscar-de | e66c8ee26fcc7bdea851c3135f8163a2e1b8639e | 2021-12-09T16:44:10.000Z | [
"pytorch",
"gpt2",
"feature-extraction",
"de",
"dataset:oscar",
"transformers",
"text-generation",
"license:mit"
]
| feature-extraction | false | yongzx | null | yongzx/gpt2-finetuned-oscar-de | 12 | null | transformers | 10,668 | ---
language:
- de
tags:
- text-generation
license: mit
datasets:
- oscar
widget:
- text: "Mein Name ist Anna. Ich komme aus Österreich und "
---
# GPT-2 finetuned on German Dataset
### Tokenizer
We first trained a tokenizer on OSCAR's `unshuffled_original_de` German data subset by following the training of GPT2 tokenizer (same vocab size of 50,257). Here's the [Python file](https://github.com/bigscience-workshop/multilingual-modeling/blob/gpt2-ko/experiments/exp-001/train_tokenizer_gpt2.py) for the training.
### Model
We finetuned the `wte` and `wpe` layers of GPT-2 (while freezing the parameters of all other layers) on OSCAR's `unshuffled_original_de` German data subset. We used [Huggingface's code](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) for fine-tuning the causal language model GPT-2, but with the following parameters changed
```
- preprocessing_num_workers: 8
- per_device_train_batch_size: 2
- gradient_accumulation_steps: 4
- per_device_eval_batch_size: 2
- eval_accumulation_steps: 4
- eval_steps: 1000
- evaluation_strategy: "steps"
- max_eval_samples: 5000
```
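The selective fine-tuning described above (updating only the `wte` and `wpe` embeddings while freezing everything else) can be reproduced with a few extra lines on top of `run_clm.py`. The snippet below is only an illustrative sketch based on GPT-2's standard module names, not the project's actual training code.
```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Freeze every parameter except the token (wte) and position (wpe) embeddings.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("transformer.wte", "transformer.wpe"))

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # expected: ['transformer.wte.weight', 'transformer.wpe.weight']
```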
**Training details**: total training steps: 457000, effective train batch size per step: 32, max tokens per batch: 1024.
**Final checkpoint**: checkpoint-457000 |
yoshitomo-matsubara/bert-large-uncased-mnli | 2c9bb0f160f5d4cf405348abcb9d46342132e926 | 2021-05-29T21:32:31.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mnli",
"dataset:ax",
"transformers",
"mnli",
"ax",
"glue",
"torchdistill",
"license:apache-2.0"
]
| text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-large-uncased-mnli | 12 | null | transformers | 10,669 | ---
language: en
tags:
- bert
- mnli
- ax
- glue
- torchdistill
license: apache-2.0
datasets:
- mnli
- ax
metrics:
- accuracy
---
`bert-large-uncased` fine-tuned on MNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mnli/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
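For reference, a minimal inference sketch is shown below. The premise/hypothesis pair is made up, and the mapping from output indices to entailment/neutral/contradiction should be read from `model.config.id2label` rather than assumed.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "yoshitomo-matsubara/bert-large-uncased-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```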
|
wietsedv/xlm-roberta-base-ft-udpos28-ar | fc4e7b640067f7e5db7e0be233d650dd3628719e | 2022-02-25T09:58:02.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"ar",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-ar | 12 | null | transformers | 10,670 |
---
language:
- ar
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-ar
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 62.8
- type: accuracy
name: Dutch Test accuracy
value: 63.5
- type: accuracy
name: German Test accuracy
value: 63.8
- type: accuracy
name: Italian Test accuracy
value: 60.2
- type: accuracy
name: French Test accuracy
value: 58.5
- type: accuracy
name: Spanish Test accuracy
value: 64.9
- type: accuracy
name: Russian Test accuracy
value: 77.2
- type: accuracy
name: Swedish Test accuracy
value: 68.5
- type: accuracy
name: Norwegian Test accuracy
value: 64.6
- type: accuracy
name: Danish Test accuracy
value: 66.1
- type: accuracy
name: Low Saxon Test accuracy
value: 28.0
- type: accuracy
name: Akkadian Test accuracy
value: 3.9
- type: accuracy
name: Armenian Test accuracy
value: 69.4
- type: accuracy
name: Welsh Test accuracy
value: 58.8
- type: accuracy
name: Old East Slavic Test accuracy
value: 55.6
- type: accuracy
name: Albanian Test accuracy
value: 68.1
- type: accuracy
name: Slovenian Test accuracy
value: 64.7
- type: accuracy
name: Guajajara Test accuracy
value: 15.0
- type: accuracy
name: Kurmanji Test accuracy
value: 59.1
- type: accuracy
name: Turkish Test accuracy
value: 62.4
- type: accuracy
name: Finnish Test accuracy
value: 66.9
- type: accuracy
name: Indonesian Test accuracy
value: 66.3
- type: accuracy
name: Ukrainian Test accuracy
value: 77.7
- type: accuracy
name: Polish Test accuracy
value: 77.0
- type: accuracy
name: Portuguese Test accuracy
value: 66.5
- type: accuracy
name: Kazakh Test accuracy
value: 68.1
- type: accuracy
name: Latin Test accuracy
value: 60.9
- type: accuracy
name: Old French Test accuracy
value: 25.6
- type: accuracy
name: Buryat Test accuracy
value: 33.6
- type: accuracy
name: Kaapor Test accuracy
value: 2.5
- type: accuracy
name: Korean Test accuracy
value: 52.0
- type: accuracy
name: Estonian Test accuracy
value: 66.5
- type: accuracy
name: Croatian Test accuracy
value: 73.3
- type: accuracy
name: Gothic Test accuracy
value: 7.2
- type: accuracy
name: Swiss German Test accuracy
value: 30.4
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 19.2
- type: accuracy
name: Naija Test accuracy
value: 26.6
- type: accuracy
name: Latvian Test accuracy
value: 69.9
- type: accuracy
name: Chinese Test accuracy
value: 30.3
- type: accuracy
name: Tagalog Test accuracy
value: 55.1
- type: accuracy
name: Bambara Test accuracy
value: 15.7
- type: accuracy
name: Lithuanian Test accuracy
value: 73.0
- type: accuracy
name: Galician Test accuracy
value: 67.5
- type: accuracy
name: Vietnamese Test accuracy
value: 60.7
- type: accuracy
name: Greek Test accuracy
value: 64.7
- type: accuracy
name: Catalan Test accuracy
value: 60.5
- type: accuracy
name: Czech Test accuracy
value: 75.4
- type: accuracy
name: Erzya Test accuracy
value: 27.3
- type: accuracy
name: Bhojpuri Test accuracy
value: 40.9
- type: accuracy
name: Thai Test accuracy
value: 53.7
- type: accuracy
name: Marathi Test accuracy
value: 68.7
- type: accuracy
name: Basque Test accuracy
value: 59.4
- type: accuracy
name: Slovak Test accuracy
value: 74.7
- type: accuracy
name: Kiche Test accuracy
value: 19.0
- type: accuracy
name: Yoruba Test accuracy
value: 14.9
- type: accuracy
name: Warlpiri Test accuracy
value: 18.6
- type: accuracy
name: Tamil Test accuracy
value: 63.0
- type: accuracy
name: Maltese Test accuracy
value: 15.1
- type: accuracy
name: Ancient Greek Test accuracy
value: 41.1
- type: accuracy
name: Icelandic Test accuracy
value: 61.6
- type: accuracy
name: Mbya Guarani Test accuracy
value: 20.3
- type: accuracy
name: Urdu Test accuracy
value: 57.4
- type: accuracy
name: Romanian Test accuracy
value: 68.4
- type: accuracy
name: Persian Test accuracy
value: 76.1
- type: accuracy
name: Apurina Test accuracy
value: 22.4
- type: accuracy
name: Japanese Test accuracy
value: 17.9
- type: accuracy
name: Hungarian Test accuracy
value: 61.1
- type: accuracy
name: Hindi Test accuracy
value: 64.1
- type: accuracy
name: Classical Chinese Test accuracy
value: 5.6
- type: accuracy
name: Komi Permyak Test accuracy
value: 30.9
- type: accuracy
name: Faroese Test accuracy
value: 54.4
- type: accuracy
name: Sanskrit Test accuracy
value: 4.9
- type: accuracy
name: Livvi Test accuracy
value: 40.3
- type: accuracy
name: Arabic Test accuracy
value: 75.9
- type: accuracy
name: Wolof Test accuracy
value: 14.6
- type: accuracy
name: Bulgarian Test accuracy
value: 75.3
- type: accuracy
name: Akuntsu Test accuracy
value: 10.5
- type: accuracy
name: Makurap Test accuracy
value: 2.1
- type: accuracy
name: Kangri Test accuracy
value: 29.2
- type: accuracy
name: Breton Test accuracy
value: 39.1
- type: accuracy
name: Telugu Test accuracy
value: 63.2
- type: accuracy
name: Cantonese Test accuracy
value: 30.1
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 27.7
- type: accuracy
name: Karelian Test accuracy
value: 44.2
- type: accuracy
name: Upper Sorbian Test accuracy
value: 54.6
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 58.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 28.7
- type: accuracy
name: Irish Test accuracy
value: 51.4
- type: accuracy
name: Nayini Test accuracy
value: 26.9
- type: accuracy
name: Munduruku Test accuracy
value: 7.0
- type: accuracy
name: Manx Test accuracy
value: 18.3
- type: accuracy
name: Skolt Sami Test accuracy
value: 25.9
- type: accuracy
name: Afrikaans Test accuracy
value: 62.5
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 18.3
- type: accuracy
name: Belarusian Test accuracy
value: 77.2
- type: accuracy
name: Serbian Test accuracy
value: 73.7
- type: accuracy
name: Moksha Test accuracy
value: 26.2
- type: accuracy
name: Western Armenian Test accuracy
value: 58.5
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 40.4
- type: accuracy
name: Khunsari Test accuracy
value: 29.7
- type: accuracy
name: Hebrew Test accuracy
value: 77.1
- type: accuracy
name: Uyghur Test accuracy
value: 56.2
- type: accuracy
name: Chukchi Test accuracy
value: 27.5
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Arabic
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ar")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ar")
```
|
saptarshidatta96/finetuning-sentiment-model-3000-samples | 5cf9bbeaa64d950d8b9a7ca397bdd66d93525658 | 2022-02-25T15:20:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | saptarshidatta96 | null | saptarshidatta96/finetuning-sentiment-model-3000-samples | 12 | null | transformers | 10,671 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.879746835443038
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3209
- Accuracy: 0.8733
- F1: 0.8797
## Model description
More information needed
## Intended uses & limitations
More information needed
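The card does not spell out intended usage; as a minimal sketch, the checkpoint can be queried through the sentiment-analysis pipeline. The example review is illustrative, and without an explicit `id2label` in the config the classes typically surface as `LABEL_0` / `LABEL_1`.
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="saptarshidatta96/finetuning-sentiment-model-3000-samples",
)
print(classifier("A beautifully shot film with a paper-thin plot."))
```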
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
inovex/multi2convai-logistics-en-bert | 85f98ab937bfd02e29a7e28e5d57bb4765152862 | 2022-03-01T08:53:59.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"license:mit"
]
| text-classification | false | inovex | null | inovex/multi2convai-logistics-en-bert | 12 | null | transformers | 10,672 | ---
tags:
- text-classification
widget:
- text: "Where can I put the parcel?"
license: mit
language: en
---
# Multi2ConvAI-Logistics: finetuned Bert for English
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: English (en)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-en-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-en-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-quality-de-bert | 969f8fb42109e842afe13bdb50d09c72b8e0bbb5 | 2022-03-01T09:00:15.000Z | [
"pytorch",
"bert",
"text-classification",
"de",
"transformers",
"license:mit"
]
| text-classification | false | inovex | null | inovex/multi2convai-quality-de-bert | 12 | null | transformers | 10,673 | ---
tags:
- text-classification
widget:
- text: "Starte das Programm"
license: mit
language: de
---
# Multi2ConvAI-Quality: finetuned Bert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: German (de)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-quality-it-mbert | b220b01a2efa5cfed2436ca57e4c4bf54d54b4cd | 2022-03-01T09:02:26.000Z | [
"pytorch",
"bert",
"text-classification",
"it",
"transformers",
"license:mit"
]
| text-classification | false | inovex | null | inovex/multi2convai-quality-it-mbert | 12 | null | transformers | 10,674 | ---
tags:
- text-classification
widget:
- text: "Avviare il programma"
license: mit
language: it
---
# Multi2ConvAI-Quality: finetuned MBert for Italian
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: Italian (it)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-it-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-it-mbert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
ghadeermobasher/BC4_Original-BiomedNLP-PubMedBERT-base-uncased-abstract | e32328fa391e1eb3b937f91c230dab8683d97f8b | 2022-03-03T14:45:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4_Original-BiomedNLP-PubMedBERT-base-uncased-abstract | 12 | null | transformers | 10,675 | Entry not found |
ghadeermobasher/BC4_Modified_BiomedNLP-PubMedBERT-base-uncased-abstract | 3e502f9f2579f4c4108aae7ed4e5253d95d9b232 | 2022-02-25T21:18:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4_Modified_BiomedNLP-PubMedBERT-base-uncased-abstract | 12 | null | transformers | 10,676 | Entry not found |
anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-4 | 7b4c0ed9bd398f81a00569d8ada5f4e109f5fdd6 | 2022-02-25T21:12:44.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-4 | 12 | null | transformers | 10,677 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
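As a minimal usage sketch, the checkpoint can be used for extractive question answering via the pipeline API; the question/context pair below is made up for illustration.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-4",
)
result = qa(
    question="How many training examples were used?",
    context="This SpanBERT checkpoint was fine-tuned on only 32 examples from SQuAD.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```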
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nsi319/xlnet-base-cased-finetuned-app | 11a6dae1231e2505c687c4e91c40781036bf0cdd | 2022-02-27T10:52:49.000Z | [
"pytorch",
"xlnet",
"text-classification",
"en",
"transformers",
"mobile app descriptions",
"playstore",
"license:mit"
]
| text-classification | false | nsi319 | null | nsi319/xlnet-base-cased-finetuned-app | 12 | null | transformers | 10,678 | ---
language: "en"
thumbnail: "https://huggingface.co/nsi319"
tags:
- xlnet
- pytorch
- text-classification
- mobile app descriptions
- playstore
license: "mit"
inference: true
---
# Mobile App Classification
## Model description
XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context.
The [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) model is fine-tuned to classify a mobile app description into one of **6 play store categories**.
Trained on 9000 samples of English App Descriptions and associated categories of apps available in [Google Play](https://play.google.com/store/apps).
## Fine-tuning
The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 512. Since this was a classification task, the model was trained with a cross-entropy loss function. The best evaluation f1 score achieved by the model was 0.8951433611497919, found after 5 epochs. The accuracy of the model on the test set was 0.895.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("nsi319/xlnet-base-cased-finetuned-app")
model = AutoModelForSequenceClassification.from_pretrained("nsi319/xlnet-base-cased-finetuned-app")
classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
classifier("The official Google Photos app is made for the way you take photos today and includes essential features like shared albums, automatic creations and an advanced editing suite. Additionally every Google Account comes with 15 GB of free storage and you can choose to automatically back up all your photos and videos in High quality or Original quality. You can then access them from any connected device and on photos.google.com.")
'''Output'''
[{'label': 'Photography', 'score': 0.998849630355835}]
```
## Limitations
Training data consists of apps from 6 play store categories, namely Education, Entertainment, Productivity, Sports, News & Magazines and Photography.
|
asini/wav2vec2-timit-demo | a076c094708a22f392e286d8aee7ff7dcda35f0a | 2022-03-01T10:37:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | asini | null | asini/wav2vec2-timit-demo | 12 | null | transformers | 10,679 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-timit-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-timit-demo
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4847
- Wer: 0.3462
## Model description
More information needed
## Intended uses & limitations
More information needed
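No usage example is provided; the sketch below shows one plausible way to transcribe audio, assuming the repository also contains the processor/vocabulary saved by the fine-tuning notebook and that the input is a 16 kHz mono waveform.
```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_name = "asini/wav2vec2-timit-demo"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

def transcribe(speech):
    # `speech` is assumed to be a 1-D float array sampled at 16 kHz,
    # e.g. loaded with librosa.load(path, sr=16_000).
    inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]
```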
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.487 | 4.0 | 500 | 1.3466 | 1.0153 |
| 0.6134 | 8.0 | 1000 | 0.4807 | 0.4538 |
| 0.2214 | 12.0 | 1500 | 0.4684 | 0.3984 |
| 0.1233 | 16.0 | 2000 | 0.5070 | 0.3779 |
| 0.0847 | 20.0 | 2500 | 0.4965 | 0.3705 |
| 0.0611 | 24.0 | 3000 | 0.4881 | 0.3535 |
| 0.0464 | 28.0 | 3500 | 0.4847 | 0.3462 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Andrey1989/mbert-finetuned-ner | a60a40c0f4842458f777c5a1a13f53c4d36174b2 | 2022-06-13T19:46:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Andrey1989 | null | Andrey1989/mbert-finetuned-ner | 12 | null | transformers | 10,680 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mbert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: lv
metrics:
- name: Precision
type: precision
value: 0.9304986338797814
- name: Recall
type: recall
value: 0.9375430144528561
- name: F1
type: f1
value: 0.9340075419952005
- name: Accuracy
type: accuracy
value: 0.9699674740348558
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-finetuned-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1264
- Precision: 0.9305
- Recall: 0.9375
- F1: 0.9340
- Accuracy: 0.9700
## Model description
More information needed
## Intended uses & limitations
More information needed
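The card gives no usage example; since the model was fine-tuned on WikiANN (which uses LOC / PER / ORG entity types), a minimal sketch looks like the following. The Latvian example sentence is illustrative only.
```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="Andrey1989/mbert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Valdis Dombrovskis dzīvo Rīgā un strādā Eiropas Komisijā."))
```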
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.301 | 1.0 | 625 | 0.1756 | 0.8843 | 0.9067 | 0.8953 | 0.9500 |
| 0.1259 | 2.0 | 1250 | 0.1248 | 0.9285 | 0.9335 | 0.9310 | 0.9688 |
| 0.0895 | 3.0 | 1875 | 0.1264 | 0.9305 | 0.9375 | 0.9340 | 0.9700 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
batterydata/batterybert-uncased-squad-v1 | 5cf7334ad5096f21556380873c5a806cf445b806 | 2022-03-05T13:52:33.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"transformers",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | false | batterydata | null | batterydata/batterybert-uncased-squad-v1 | 12 | null | transformers | 10,681 | ---
language: en
tags: question answering
license: apache-2.0
datasets:
- squad
- batterydata/battery-device-data-qa
metrics: squad
---
# BatteryBERT-uncased for QA
**Language model:** batterybert-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "batterybert-uncased"
max_seq_len = 386
learning_rate = 3e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 81.08,
"f1": 88.41,
```
Evaluated on the battery device dataset.
```
"precision": 68.27,
"recall": 80.88,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
batterydata/bert-base-uncased-abstract | 383638f165004b6c8c2f3fdb3d1d2ce794b8b0b5 | 2022-03-05T14:44:13.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:batterydata/paper-abstracts",
"transformers",
"Text Classification",
"license:apache-2.0"
]
| text-classification | false | batterydata | null | batterydata/bert-base-uncased-abstract | 12 | null | transformers | 10,682 | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BERT-base-uncased for Battery Abstract Classification
**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 13
base_LM_model = "bert-base-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 96.79,
"Test accuracy": 96.29,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
cnu/distilbert-base-uncased-finetuned-cola | 3390b50b51f566b9bb7e9e6059688b9e92b83e40 | 2022-03-02T07:30:35.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | cnu | null | cnu/distilbert-base-uncased-finetuned-cola | 12 | null | transformers | 10,683 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5474713423103301
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8651
- Matthews Correlation: 0.5475
## Model description
More information needed
## Intended uses & limitations
More information needed
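As a minimal sketch, the checkpoint can be used to score sentences for grammatical acceptability (the CoLA task). The example sentence is made up, and without an explicit `id2label` in the config the classes usually appear as `LABEL_0` (unacceptable) / `LABEL_1` (acceptable), so verify against `model.config.id2label`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cnu/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

sentence = "The book was written by she."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```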
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5233 | 1.0 | 535 | 0.5353 | 0.4004 |
| 0.3497 | 2.0 | 1070 | 0.5165 | 0.5076 |
| 0.2386 | 3.0 | 1605 | 0.6661 | 0.5161 |
| 0.1745 | 4.0 | 2140 | 0.7730 | 0.5406 |
| 0.1268 | 5.0 | 2675 | 0.8651 | 0.5475 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.6
|
clisi2000/distilbert-base-uncased-finetuned-emotion | 3caab60c0f4e263855d0dafa37419e9a7d5b94c9 | 2022-03-06T07:09:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | clisi2000 | null | clisi2000/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,684 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9246284188099615
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
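A minimal usage sketch is shown below. The underlying `emotion` dataset has six classes (sadness, joy, love, anger, fear, surprise), but how the saved config names them should be checked via `model.config.id2label`; the example sentence is illustrative.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="clisi2000/distilbert-base-uncased-finetuned-emotion",
    return_all_scores=True,
)
print(classifier("I can't believe how lucky I am today!"))
```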
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8174 | 1.0 | 250 | 0.3166 | 0.905 | 0.9023 |
| 0.2534 | 2.0 | 500 | 0.2183 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cpu
- Datasets 1.16.1
- Tokenizers 0.10.1
|
ttmusic/distilbert-base-uncased-finetuned-imdb | 9f2aa94ccde5cc450648bc578e9157fe6b92b752 | 2022-03-06T01:28:38.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | ttmusic | null | ttmusic/distilbert-base-uncased-finetuned-imdb | 12 | null | transformers | 10,685 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4513
## Model description
More information needed
## Intended uses & limitations
More information needed
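Since this is a masked-language model fine-tuned on IMDB, one plausible use is ranking candidate fillers for a masked token. The sketch below is illustrative only.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "ttmusic/distilbert-base-uncased-finetuned-imdb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

text = f"This film is a {tokenizer.mask_token} piece of cinema."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print([tokenizer.decode(i) for i in top_ids])
```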
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 79 | 2.5347 |
| 2.6681 | 2.0 | 158 | 2.4416 |
| 2.6681 | 3.0 | 237 | 2.4634 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.6
|
bishnu/finetuning-sentiment-model-3000-samples | 0ad49b15cca93b9ca27ca681cc2eec49576e8764 | 2022-03-09T17:05:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | bishnu | null | bishnu/finetuning-sentiment-model-3000-samples | 12 | null | transformers | 10,686 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8556701030927835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5523
- Accuracy: 0.86
- F1: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
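Until the author adds guidance here, a hedged inference sketch (assuming the checkpoint loads with the standard `transformers` text-classification pipeline; the review text below is made up) might be:
```
from transformers import pipeline

# Load the fine-tuned sentiment classifier
# (assumed loadable via the standard pipeline API)
classifier = pipeline("text-classification", model="bishnu/finetuning-sentiment-model-3000-samples")

# Labels are typically LABEL_0 / LABEL_1 unless id2label was customised during training
print(classifier("A surprisingly moving film with a terrific cast."))
```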
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
spy24/autonlp-optimized-paraphrasing-615217541 | 7d402f22bfd7b781ca1fb020554a95182ad47f79 | 2022-03-07T08:56:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:spy24/autonlp-data-optimized-paraphrasing",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | spy24 | null | spy24/autonlp-optimized-paraphrasing-615217541 | 12 | null | transformers | 10,687 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-optimized-paraphrasing
co2_eq_emissions: 1.166696812121839
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 615217541
- CO2 Emissions (in grams): 1.166696812121839
## Validation Metrics
- Loss: 0.00019549368880689144
- Rouge1: 100.0
- Rouge2: 51.4451
- RougeL: 100.0
- RougeLsum: 100.0
- Gen Len: 4.104
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/spy24/autonlp-optimized-paraphrasing-615217541
``` |
abhishek/autonlp-swahili-sentiment-615517563 | e66110eb541d862b2d257254b5dea87757f168fb | 2022-03-07T12:54:03.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:abhishek/autonlp-data-swahili-sentiment",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | abhishek | null | abhishek/autonlp-swahili-sentiment-615517563 | 12 | null | transformers | 10,688 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-swahili-sentiment
co2_eq_emissions: 1.9057858628956459
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 615517563
- CO2 Emissions (in grams): 1.9057858628956459
## Validation Metrics
- Loss: 0.6990908980369568
- Accuracy: 0.695364238410596
- Macro F1: 0.6088819062581828
- Micro F1: 0.695364238410596
- Weighted F1: 0.677326207350606
- Macro Precision: 0.6945099492363175
- Micro Precision: 0.695364238410596
- Weighted Precision: 0.6938596845881614
- Macro Recall: 0.5738408020723632
- Micro Recall: 0.695364238410596
- Weighted Recall: 0.695364238410596
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-swahili-sentiment-615517563
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-swahili-sentiment-615517563", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-swahili-sentiment-615517563", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
zdepablo/distilbert-base-uncased-finetuned-emotion | 14b8eecb0c52f0a6435a32f675f9154354ed78d9 | 2022-03-09T23:04:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | zdepablo | null | zdepablo/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,689 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9241594821961092
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2311
- Accuracy: 0.924
- F1: 0.9242
## Model description
More information needed
## Intended uses & limitations
More information needed
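As a tentative sketch of how the checkpoint might be used (assuming standard `Auto*` loading and that the label names are stored in the model config), inference without the pipeline helper could look like this:
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and classification head
# (assumed to load with the standard Auto* classes)
model_id = "zdepablo/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't believe how happy this made me!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and look up the predicted label in the config
probs = torch.softmax(logits, dim=-1)[0]
prediction = model.config.id2label[int(probs.argmax())]
print(prediction, float(probs.max()))
```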
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8868 | 1.0 | 250 | 0.3435 | 0.9005 | 0.8980 |
| 0.2686 | 2.0 | 500 | 0.2311 | 0.924 | 0.9242 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Kaveh8/autonlp-imdb_rating-625417974 | 5670bb192c112974e4047d211228c29c1906db16 | 2022-03-10T13:20:41.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:Kaveh8/autonlp-data-imdb_rating",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | Kaveh8 | null | Kaveh8/autonlp-imdb_rating-625417974 | 12 | null | transformers | 10,690 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Kaveh8/autonlp-data-imdb_rating
co2_eq_emissions: 0.7952957276830314
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 625417974
- CO2 Emissions (in grams): 0.7952957276830314
## Validation Metrics
- Loss: 1.0167548656463623
- Accuracy: 0.5934065934065934
- Macro F1: 0.5871237509176406
- Micro F1: 0.5934065934065934
- Weighted F1: 0.5905118014752566
- Macro Precision: 0.5959908336094294
- Micro Precision: 0.5934065934065934
- Weighted Precision: 0.5979368174068634
- Macro Recall: 0.5884714803600252
- Micro Recall: 0.5934065934065934
- Weighted Recall: 0.5934065934065934
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Kaveh8/autonlp-imdb_rating-625417974
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Kaveh8/autonlp-imdb_rating-625417974", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Kaveh8/autonlp-imdb_rating-625417974", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
haddadalwi/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-islamic-squad | 1f444815eb2e009edf195c6d98fecdce594459c8 | 2022-03-28T05:04:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | haddadalwi | null | haddadalwi/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-islamic-squad | 12 | null | transformers | 10,691 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-islamic-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-islamic-squad
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
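In the absence of author-provided examples, a hedged extractive-QA sketch (assuming the standard `transformers` question-answering pipeline; the question and context below are placeholders) might be:
```
from transformers import pipeline

# Load the fine-tuned extractive QA model
# (assumed compatible with the standard pipeline API)
qa = pipeline(
    "question-answering",
    model="haddadalwi/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-islamic-squad",
)

# The pipeline extracts an answer span from the context and returns a confidence score
result = qa(
    question="What dataset was the base model fine-tuned on?",
    context="The base checkpoint was fine-tuned on SQuAD before this additional training step.",
)
print(result["answer"], result["score"])
```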
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 40 | 0.4082 |
| No log | 2.0 | 80 | 0.3855 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
cloudblack/bert-base-finetuned-sts | 712f7c3f93b6b4c4c7453639d4ab8b927586d4e3 | 2022-03-13T11:13:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | cloudblack | null | cloudblack/bert-base-finetuned-sts | 12 | null | transformers | 10,692 | Entry not found |
anwesham/mbert_hi_ur | e8e2905183d1e248e172b1dba6b6c489c8e9f59d | 2022-03-13T02:36:43.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | anwesham | null | anwesham/mbert_hi_ur | 12 | null | transformers | 10,693 | Entry not found |
clapika2010/flights_finetuned | 5ca4dc9495a0882fb748b2cf2584e6b0ff4ad2ae | 2022-03-12T07:46:54.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | clapika2010 | null | clapika2010/flights_finetuned | 12 | null | transformers | 10,694 | Entry not found |
RobertoMCA97/distilbert-base-uncased-finetuned-emotion | b8bf3e877355e17b6a9b03d5b1f8ca5e01457c6b | 2022-03-12T17:11:44.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | RobertoMCA97 | null | RobertoMCA97/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,695 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9257511693451751
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2157
- Accuracy: 0.9255
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
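If the model is served on the hosted Inference API (an assumption, not something stated in this card), it could be queried from Python roughly as follows; `YOUR_API_KEY` is a placeholder for a valid Hugging Face token:
```
import requests

# Query the hosted Inference API (assumes the model is deployed there)
API_URL = "https://api-inference.huggingface.co/models/RobertoMCA97/distilbert-base-uncased-finetuned-emotion"
headers = {"Authorization": "Bearer YOUR_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I am so excited about this!"})
print(response.json())
```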
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8145 | 1.0 | 250 | 0.3093 | 0.91 | 0.9081 |
| 0.2461 | 2.0 | 500 | 0.2157 | 0.9255 | 0.9258 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Ramu/distilbert-base-uncased-finetuned-emotion | 4ea7758319c1416db8e70c5d32cf3a277d368441 | 2022-03-13T14:27:54.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Ramu | null | Ramu/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,696 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9262005126757141
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2167
- Accuracy: 0.926
- F1: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8112 | 1.0 | 250 | 0.3147 | 0.903 | 0.8992 |
| 0.2454 | 2.0 | 500 | 0.2167 | 0.926 | 0.9262 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aGabillon/distilbert-base-uncased-finetuned-emotion | c9909c051291b19611466538f34468c84865c715 | 2022-03-13T04:19:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aGabillon | null | aGabillon/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,697 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.921871942661868
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2294
- Accuracy: 0.9215
- F1: 0.9219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8304 | 1.0 | 250 | 0.3312 | 0.899 | 0.8962 |
| 0.2547 | 2.0 | 500 | 0.2294 | 0.9215 | 0.9219 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
alexhf90/Clasificacion_sentimientos | 15549210e7ab5a218e13a67ff6047c4b262b0148 | 2022-03-15T22:20:11.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | alexhf90 | null | alexhf90/Clasificacion_sentimientos | 12 | 1 | transformers | 10,698 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Clasificacion_sentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clasificacion_sentimientos
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3399
- Accuracy: 0.9428
## Model description
This model is trained to classify whether a film review is positive or negative.
## Intended uses & limitations
More information needed
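As a minimal sketch (assuming the checkpoint loads with the standard `transformers` pipeline API and expects Spanish-language review text), inference might look like this; the label names depend on how `id2label` was set during training:
```
from transformers import pipeline

# Load the fine-tuned Spanish review classifier
# (assumed loadable via the standard pipeline API)
classifier = pipeline("text-classification", model="alexhf90/Clasificacion_sentimientos")

# Labels may appear as LABEL_0 / LABEL_1 unless id2label was given readable names
print(classifier("La película me pareció excelente, la recomiendo totalmente."))
```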
## Training and evaluation data
The model was trained on film reviews from https://www.filmaffinity.com/es/main.html
- The reviews come from the dataset hosted on Kaggle:
url: https://www.kaggle.com/ricardomoya/criticas-peliculas-filmaffinity-en-espaniol/code
## Training procedure
The review_rate field was used to label the reviews as positive or negative:
Positive: ratings of 8, 9, and 10.
Negative: ratings of 3, 2, and 1.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2566 | 1.0 | 901 | 0.5299 | 0.8935 |
| 0.0963 | 2.0 | 1802 | 0.2885 | 0.9383 |
| 0.0133 | 3.0 | 2703 | 0.3546 | 0.9406 |
| 0.0002 | 4.0 | 3604 | 0.3399 | 0.9428 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anton-l/xtreme_s_xlsr_300m_mls | e549e826c377de13e756208ce95e6971465078a7 | 2022-04-03T18:54:35.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:google/xtreme_s",
"transformers",
"google/xtreme_s",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | anton-l | null | anton-l/xtreme_s_xlsr_300m_mls | 12 | 1 | transformers | 10,699 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- google/xtreme_s
- generated_from_trainer
datasets:
- google/xtreme_s
model-index:
- name: xtreme_s_xlsr_mls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_mls
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MLS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6215
- Wer: 0.3033
- Cer: 0.0951
## Model description
More information needed
## Intended uses & limitations
More information needed
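A tentative transcription sketch (assuming the checkpoint works with the standard `transformers` ASR pipeline and that the input is a 16 kHz mono recording in one of the MLS languages; `sample.wav` is a placeholder path):
```
from transformers import pipeline

# Load the fine-tuned wav2vec2 checkpoint for CTC decoding
# (assumed compatible with the automatic-speech-recognition pipeline)
asr = pipeline("automatic-speech-recognition", model="anton-l/xtreme_s_xlsr_300m_mls")

# The pipeline decodes the audio file and returns the transcript
print(asr("sample.wav")["text"])
```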
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 3.0446 | 1.91 | 500 | 2.9866 | 1.0 | 1.0 |
| 0.8789 | 3.82 | 1000 | 0.8574 | 0.7225 | 0.2355 |
| 0.4766 | 5.72 | 1500 | 0.4813 | 0.4624 | 0.1394 |
| 0.3779 | 7.63 | 2000 | 0.4465 | 0.4154 | 0.1309 |
| 0.3244 | 9.54 | 2500 | 0.4213 | 0.3683 | 0.1163 |
| 0.346 | 11.45 | 3000 | 0.4606 | 0.4033 | 0.1299 |
| 0.3092 | 13.36 | 3500 | 0.4160 | 0.3585 | 0.1115 |
| 0.3287 | 15.27 | 4000 | 0.4364 | 0.3631 | 0.1165 |
| 0.3165 | 17.18 | 4500 | 0.4218 | 0.3451 | 0.1056 |
| 0.2874 | 19.08 | 5000 | 0.4583 | 0.3650 | 0.1151 |
| 0.3089 | 20.99 | 5500 | 0.4424 | 0.3485 | 0.1137 |
| 0.2689 | 22.9 | 6000 | 0.4427 | 0.3542 | 0.1128 |
| 0.234 | 24.81 | 6500 | 0.4204 | 0.3431 | 0.1069 |
| 0.2363 | 26.72 | 7000 | 0.4792 | 0.3689 | 0.1191 |
| 0.2796 | 28.62 | 7500 | 0.4867 | 0.3662 | 0.1154 |
| 0.2447 | 30.53 | 8000 | 0.4908 | 0.3584 | 0.1160 |
| 0.22 | 32.44 | 8500 | 0.5315 | 0.3626 | 0.1240 |
| 0.1961 | 34.35 | 9000 | 0.5121 | 0.3610 | 0.1168 |
| 0.1959 | 36.26 | 9500 | 0.5140 | 0.3648 | 0.1179 |
| 0.1748 | 38.17 | 10000 | 0.5464 | 0.3763 | 0.1206 |
| 0.197 | 40.08 | 10500 | 0.5199 | 0.3515 | 0.1128 |
| 0.2166 | 41.98 | 11000 | 0.5336 | 0.3607 | 0.1191 |
| 0.2078 | 43.89 | 11500 | 0.5389 | 0.3518 | 0.1136 |
| 0.1827 | 45.8 | 12000 | 0.5014 | 0.3287 | 0.1053 |
| 0.1783 | 47.71 | 12500 | 0.5408 | 0.3545 | 0.1121 |
| 0.1489 | 49.62 | 13000 | 0.5292 | 0.3472 | 0.1098 |
| 0.1665 | 51.53 | 13500 | 0.5052 | 0.3300 | 0.1033 |
| 0.1631 | 53.43 | 14000 | 0.5241 | 0.3362 | 0.1081 |
| 0.1943 | 55.34 | 14500 | 0.5453 | 0.3373 | 0.1076 |
| 0.1504 | 57.25 | 15000 | 0.5958 | 0.3594 | 0.1149 |
| 0.136 | 59.16 | 15500 | 0.5645 | 0.3367 | 0.1082 |
| 0.1224 | 61.07 | 16000 | 0.5322 | 0.3302 | 0.1039 |
| 0.1156 | 62.98 | 16500 | 0.5728 | 0.3332 | 0.1061 |
| 0.114 | 64.88 | 17000 | 0.5994 | 0.3410 | 0.1125 |
| 0.1445 | 66.79 | 17500 | 0.6048 | 0.3471 | 0.1098 |
| 0.1281 | 68.7 | 18000 | 0.5747 | 0.3278 | 0.1042 |
| 0.1233 | 70.61 | 18500 | 0.6021 | 0.3375 | 0.1082 |
| 0.1109 | 72.52 | 19000 | 0.5851 | 0.3188 | 0.1021 |
| 0.0943 | 74.43 | 19500 | 0.5944 | 0.3238 | 0.1033 |
| 0.1418 | 76.34 | 20000 | 0.5904 | 0.3143 | 0.0997 |
| 0.1317 | 78.24 | 20500 | 0.6291 | 0.3283 | 0.1047 |
| 0.1177 | 80.15 | 21000 | 0.6114 | 0.3190 | 0.1000 |
| 0.1138 | 82.06 | 21500 | 0.6155 | 0.3245 | 0.1023 |
| 0.1074 | 83.97 | 22000 | 0.6094 | 0.3153 | 0.1004 |
| 0.11 | 85.88 | 22500 | 0.6041 | 0.3141 | 0.0988 |
| 0.1096 | 87.78 | 23000 | 0.6243 | 0.3110 | 0.0986 |
| 0.1017 | 89.69 | 23500 | 0.6110 | 0.3121 | 0.0984 |
| 0.1015 | 91.6 | 24000 | 0.6385 | 0.3093 | 0.0978 |
| 0.0952 | 93.51 | 24500 | 0.6155 | 0.3036 | 0.0953 |
| 0.0896 | 95.42 | 25000 | 0.6215 | 0.3033 | 0.0951 |
| 0.0953 | 97.33 | 25500 | 0.6293 | 0.3037 | 0.0953 |
| 0.0834 | 99.24 | 26000 | 0.6302 | 0.3036 | 0.0952 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|