modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
lkwate/roberta-base-mnli | a897fa70d81452bad5592776055417fd5fcae651 | 2022-01-08T11:45:51.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | lkwate | null | lkwate/roberta-base-mnli | 2 | 1 | transformers | 24,400 | Entry not found |
longcld/t5-small-e2e-qa | 14fe5bbdc7f8786bf35a3c90e1aaf2c4329c9cc3 | 2021-09-16T01:39:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | longcld | null | longcld/t5-small-e2e-qa | 2 | null | transformers | 24,401 | Entry not found |
loodos/bert-base-turkish-cased | 6dd34c6f96148479cb615d356bd405dc0a9901f3 | 2021-05-19T22:03:36.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"tr",
"transformers"
] | null | false | loodos | null | loodos/bert-base-turkish-cased | 2 | null | transformers | 24,402 | ---
language: tr
---
# Turkish Language Models with Huggingface's Transformers
As the R&D team at Loodos, we release cased and uncased versions of the most recent language models for Turkish. More details about the pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).
# Turkish BERT-Base (cased)
This is a BERT-Base model with 12 encoder layers and a hidden size of 768, trained on a cased Turkish dataset.
## Usage
Using `AutoModel` and `AutoTokenizer` from Transformers, you can load the model as shown below.
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("loodos/bert-base-turkish-cased")
model = AutoModel.from_pretrained("loodos/bert-base-turkish-cased")
```
## Details and Contact
You can contact us to ask a question, open an issue, or give feedback via our GitHub [repo](https://github.com/Loodos/turkish-language-models).
## Acknowledgments
Many thanks to the TFRC team for providing us with cloud TPUs on TensorFlow Research Cloud to train our models.
|
loodos/electra-base-turkish-64k-uncased-discriminator | 62f20d5453b04c05174be557edce528966dc3a65 | 2020-12-11T21:49:26.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"tr",
"transformers"
] | null | false | loodos | null | loodos/electra-base-turkish-64k-uncased-discriminator | 2 | null | transformers | 24,403 | ---
language: tr
---
# Turkish Language Models with Huggingface's Transformers
As the R&D team at Loodos, we release cased and uncased versions of the most recent language models for Turkish. More details about the pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).
# Turkish ELECTRA-Base-discriminator (uncased/64k)
This is the discriminator of an ELECTRA-Base model, which has the same structure as BERT-Base, trained on an uncased Turkish dataset. This version has a vocabulary of size 64k, instead of the default 32k.
## Usage
Using `AutoModelWithLMHead` and `AutoTokenizer` from Transformers, you can load the model as shown below.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("loodos/electra-base-turkish-64k-uncased-discriminator", do_lower_case=False)
model = AutoModelWithLMHead.from_pretrained("loodos/electra-base-turkish-64k-uncased-discriminator")

# TextNormalization is provided in the Loodos repo linked below (see "Notes on Tokenizers").
normalizer = TextNormalization()
text = "Şanlıurfa'da hava çok güzel"  # example input (any Turkish text)
normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True)
tokenizer.tokenize(normalized_text)
```
### Notes on Tokenizers
Currently, Huggingface's tokenizers (which were written in Python) have a bug concerning the letters "ı, i, I, İ" and non-ASCII Turkish-specific letters. There are two reasons:
1- The vocabulary and SentencePiece model are created with NFC/NFKC normalization, but the tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text containing the Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training, and loss of information: some tokens (like "şanlıurfa", "öğün", "çocuk") are never trained. NFD/NFKD normalization is not suitable for Turkish.
2- Python's default `str.lower()` and `str.upper()` make the conversions
- "I" and "İ" to 'i'
- 'i' and 'ı' to 'I'
respectively. However, in Turkish, 'I' and 'İ' are two different letters.
We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's GitHub repo about this bug. Until it is fixed, if you want to train your model on uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models).
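To make the effect concrete, here is a minimal plain-Python demonstration (not from the Loodos repo) of both problems:
```python
import unicodedata

# Problem 1: NFD decomposes Turkish characters into a base letter plus a combining mark.
print([hex(ord(c)) for c in unicodedata.normalize("NFD", "İ")])  # ['0x49', '0x307'] -> "I" + combining dot above

# Problem 2: Python's case conversion is locale-unaware.
print("İ".lower())               # 'i̇' (two code points), not the single Turkish letter 'i'
print("i".upper(), "ı".upper())  # 'I' 'I' -- the i/ı distinction is lost
```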
## Details and Contact
You can contact us to ask a question, open an issue, or give feedback via our GitHub [repo](https://github.com/Loodos/turkish-language-models).
## Acknowledgments
Many thanks to the TFRC team for providing us with cloud TPUs on TensorFlow Research Cloud to train our models.
|
loodos/electra-base-turkish-uncased-discriminator | 691a9d72338880821686993cbbf30f5292770528 | 2020-12-11T21:49:30.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"tr",
"transformers"
] | null | false | loodos | null | loodos/electra-base-turkish-uncased-discriminator | 2 | null | transformers | 24,404 | ---
language: tr
---
# Turkish Language Models with Huggingface's Transformers
As the R&D team at Loodos, we release cased and uncased versions of the most recent language models for Turkish. More details about the pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).
# Turkish ELECTRA-Base-discriminator (uncased)
This is the discriminator of an ELECTRA-Base model, which has the same structure as BERT-Base, trained on an uncased Turkish dataset.
## Usage
Using `AutoModelWithLMHead` and `AutoTokenizer` from Transformers, you can load the model as shown below.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("loodos/electra-base-turkish-uncased-discriminator", do_lower_case=False)
model = AutoModelWithLMHead.from_pretrained("loodos/electra-base-turkish-uncased-discriminator")

# TextNormalization is provided in the Loodos repo linked below (see "Notes on Tokenizers").
normalizer = TextNormalization()
text = "Şanlıurfa'da hava çok güzel"  # example input (any Turkish text)
normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True)
tokenizer.tokenize(normalized_text)
```
### Notes on Tokenizers
Currently, Huggingface's tokenizers (which were written in Python) have a bug concerning the letters "ı, i, I, İ" and non-ASCII Turkish-specific letters. There are two reasons:
1- The vocabulary and SentencePiece model are created with NFC/NFKC normalization, but the tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text containing the Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training, and loss of information: some tokens (like "şanlıurfa", "öğün", "çocuk") are never trained. NFD/NFKD normalization is not suitable for Turkish.
2- Python's default `str.lower()` and `str.upper()` make the conversions
- "I" and "İ" to 'i'
- 'i' and 'ı' to 'I'
respectively. However, in Turkish, 'I' and 'İ' are two different letters.
We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's GitHub repo about this bug. Until it is fixed, if you want to train your model on uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models).
## Details and Contact
You can contact us to ask a question, open an issue, or give feedback via our GitHub [repo](https://github.com/Loodos/turkish-language-models).
## Acknowledgments
Many thanks to the TFRC team for providing us with cloud TPUs on TensorFlow Research Cloud to train our models.
|
loodos/electra-small-turkish-cased-discriminator | bfe21e665e673018d243d6b9c0ccd28166eeb4eb | 2020-12-11T21:49:33.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"tr",
"transformers"
] | null | false | loodos | null | loodos/electra-small-turkish-cased-discriminator | 2 | null | transformers | 24,405 | ---
language: tr
---
# Turkish Language Models with Huggingface's Transformers
As the R&D team at Loodos, we release cased and uncased versions of the most recent language models for Turkish. More details about the pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).
# Turkish ELECTRA-Small-discriminator (cased)
This is the discriminator of an ELECTRA-Small model, which has 12 encoder layers with a hidden size of 256, trained on a cased Turkish dataset.
## Usage
Using `AutoModelWithLMHead` and `AutoTokenizer` from Transformers, you can load the model as shown below.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("loodos/electra-small-turkish-cased-discriminator")
model = AutoModelWithLMHead.from_pretrained("loodos/electra-small-turkish-cased-discriminator")
```
## Details and Contact
You can contact us to ask a question, open an issue, or give feedback via our GitHub [repo](https://github.com/Loodos/turkish-language-models).
## Acknowledgments
Many thanks to the TFRC team for providing us with cloud TPUs on TensorFlow Research Cloud to train our models.
|
lsb/wav2vec2-base-it-latin | 95792abd7c808883ac48c678095c882750ce64de | 2022-03-24T11:51:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"la",
"dataset:lsb/poetaexmachina-mp3-recitations",
"transformers",
"robust-speech-event",
"hf-asr-leaderboard",
"license:agpl-3.0",
"model-index"
] | automatic-speech-recognition | false | lsb | null | lsb/wav2vec2-base-it-latin | 2 | 1 | transformers | 24,406 | ---
language:
- la
license: agpl-3.0
tags:
- robust-speech-event
- hf-asr-leaderboard
datasets:
- lsb/poetaexmachina-mp3-recitations
metrics:
- wer
model-index:
- name: wav2vec2-base-it-latin
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: lsb/poetaexmachina-mp3-recitations
name: Poeta Ex Machina mp3 recitations
metrics:
- type: wer
value: 0.398
name: Test WER
---
# wav2vec2-base-it-latin
This model is a fine-tuned version of [wav2vec2-base-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-base-it-voxpopuli).
The dataset used is [poetaexmachina-mp3-recitations](https://github.com/lsb/poetaexmachina-mp3-recitations):
all of the 2-series texts (Vergil) and every tenth 1-series text (words from Poeta Ex Machina's [database](https://github.com/lsb/poetaexmachina/blob/master/merged-scansions.db) of words with scansions).
It achieves the following [results](https://github.com/lsb/tironiculum/blame/trunk/wav2vec2%20base%20it%20latin.ipynb#L1234) on the evaluation set:
- Loss: 0.1943
- WER: 0.398
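The card does not include inference code; a minimal usage sketch, assuming an audio file (here a placeholder path `recitation.wav`), might look like this:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("lsb/wav2vec2-base-it-latin")
model = Wav2Vec2ForCTC.from_pretrained("lsb/wav2vec2-base-it-latin")

# Load and resample the recording to 16 kHz, the rate expected by wav2vec2 models.
speech, sr = torchaudio.load("recitation.wav")
speech = torchaudio.transforms.Resample(sr, 16_000)(speech).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```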
|
lsb/wav2vec2-base-pemlsb-la | 3e1c71baad1be8cb63d758dda7fc3d901fa7be6f | 2022-03-06T02:34:03.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"license:agpl-3.0"
] | automatic-speech-recognition | false | lsb | null | lsb/wav2vec2-base-pemlsb-la | 2 | 1 | transformers | 24,407 | ---
license: agpl-3.0
---
|
lucio/wav2vec2-large-xlsr-kinyarwanda | 24e7a8cf6cc9bb8c081b2056bb44276598d28038 | 2021-07-06T10:16:00.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"rw",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lucio | null | lucio/wav2vec2-large-xlsr-kinyarwanda | 2 | null | transformers | 24,408 | ---
language: rw
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large Kinyarwanda no punctuation
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice rw
type: common_voice
args: rw
metrics:
- name: Test WER
type: wer
value: 40.59
---
# Wav2Vec2-Large-XLSR-53-rw
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kinyarwanda using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset, trained on about 20% of the training data (limited to utterances without downvotes and shorter than 9.5 seconds) and validated on 2048 utterances from the validation set. In contrast to the [lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied](https://huggingface.co/lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied) model, which predicts the apostrophes that mark contractions of pronouns with vowel-initial words, this model does not predict any punctuation.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# WARNING! Downloading and extracting this dataset uses about 80GB of disk space.
test_dataset = load_dataset("common_voice", "rw", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
Result:
```
Prediction: ['yaherukaga gukora igitaramo y iki mu jyiwa na mul mumbiliki', 'ini rero ntibizashoboka ka nibo nkunrabibzi']
Reference: ['Yaherukaga gukora igitaramo nk’iki mu Mujyi wa Namur mu Bubiligi.', 'Ibi rero, ntibizashoboka, kandi nawe arabizi.']
```
## Evaluation
The model can be evaluated as follows on the Kinyarwanda test data of Common Voice. Note that to even load the test data, the whole 40GB Kinyarwanda dataset will be downloaded and extracted into another 40GB directory, so you will need that space available on disk (e.g. not possible in the free tier of Google Colab). This script uses the `chunked_wer` function from [pcuenq](https://huggingface.co/pcuenq/wav2vec2-large-xlsr-53-es).
```python
import jiwer
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import unidecode
test_dataset = load_dataset("common_voice", "rw", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda")
model.to("cuda")
chars_to_ignore_regex = r'[!"#$%&()*+,./:;<=>?@\[\]\\_{}|~£¤¨©ª«¬®¯°·¸»¼½¾ðʺ˜˝ˮ‐–—―‚“”„‟•…″‽₋€™−√�]'
def remove_special_characters(batch):
batch["text"] = re.sub(r'[ʻʽʼ‘’´`]', r"'", batch["sentence"]) # normalize apostrophes
batch["text"] = re.sub(chars_to_ignore_regex, "", batch["text"]).lower().strip() # remove all other punctuation
batch["text"] = re.sub(r"(-| ?' ?| +)", " ", batch["text"]) # treat dash and apostrophe as word boundary
batch["text"] = unidecode.unidecode(batch["text"]) # strip accents
return batch
## Audio pre-processing
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
batch["sampling_rate"] = 16_000
return batch
def cv_prepare(batch):
batch = remove_special_characters(batch)
batch = speech_file_to_array_fn(batch)
return batch
test_dataset = test_dataset.map(cv_prepare)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000)))
```
**Test Result**: 40.59 %
## Training
Blocks of examples from the Common Voice training dataset were used for training, after filtering out utterances that had any `down_vote` or were longer than 9.5 seconds. The data used totals about 100k examples, 20% of the available data. Training proceeded for 30k global steps, on 1 V100 GPU provided by OVHcloud. For validation, 2048 examples of the validation dataset were used.
The [script used for training](https://github.com/serapio/transformers/blob/feature/xlsr-finetune/examples/research_projects/wav2vec2/run_common_voice.py) is adapted from the [example script provided in the transformers repo](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py). |
lucius/distilroberta-base-finetuned-wikitext2 | 76843fcd257d27786dbac1ca38138987c47ade6a | 2021-10-17T10:40:14.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | lucius | null | lucius/distilroberta-base-finetuned-wikitext2 | 2 | null | transformers | 24,409 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the WikiText-2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
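As a rough sketch, these settings correspond to a `TrainingArguments` configuration like the one below (the Adam betas and epsilon listed above are the library defaults; the actual training script is not part of this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilroberta-base-finetuned-wikitext2",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```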
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0827 | 1.0 | 2406 | 1.9227 |
| 1.9993 | 2.0 | 4812 | 1.8828 |
| 1.9614 | 3.0 | 7218 | 1.8172 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
luffycodes/bb_narataka_roberta_base_nli_bsz_16_bb_bsz_16_nli_lr_3e6_bb_lr_3e6_wu_25k_grad_adam_mask | c4c63472e33ad093158e6cf96dfd34067d319cc2 | 2021-11-05T09:06:55.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_base_nli_bsz_16_bb_bsz_16_nli_lr_3e6_bb_lr_3e6_wu_25k_grad_adam_mask | 2 | null | transformers | 24,410 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_3e6_bb_lr_3e6_wu_25k_ep_10_grad_adam_mask | 07affeea6088f5a71eb05c22ddbd0a822df712f0 | 2021-11-04T16:48:44.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_3e6_bb_lr_3e6_wu_25k_ep_10_grad_adam_mask | 2 | null | transformers | 24,411 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_5e6_bb_lr_5e6_wu_7k_grad_adam_orig_mask | b32f2c4c3f38dfb2b09efebfc4b15ddd312c4fae | 2021-11-02T22:00:58.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_5e6_bb_lr_5e6_wu_7k_grad_adam_orig_mask | 2 | null | transformers | 24,412 | Entry not found |
luffycodes/om_roberta_mnli_lr1e5_ep_3.model | b1f6275ba23e4b84af1d435731ca43c03094c8a5 | 2021-12-02T20:00:01.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/om_roberta_mnli_lr1e5_ep_3.model | 2 | null | transformers | 24,413 | Entry not found |
luffycodes/om_roberta_mnli_lr1e5_ep_5.model | 9ad3a067a347221a932c1de37078947c24968c48 | 2021-12-03T04:50:02.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/om_roberta_mnli_lr1e5_ep_5.model | 2 | null | transformers | 24,414 | Entry not found |
luffycodes/om_roberta_mnli_lr1e5_nli_bb_lambda_dot5_ep_3.model | e29ebf60a5b89b31610e4f29a62e6b303bb0153d | 2021-12-03T10:02:18.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/om_roberta_mnli_lr1e5_nli_bb_lambda_dot5_ep_3.model | 2 | null | transformers | 24,415 | Entry not found |
luffycodes/om_roberta_mnli_lr5e6_ep_10.model | 11c383277a3889fe586fa6b3563363f8cfb947ea | 2021-12-03T02:42:07.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/om_roberta_mnli_lr5e6_ep_10.model | 2 | null | transformers | 24,416 | Entry not found |
luffycodes/om_roberta_mnli_lr5e6_ep_3.model | 2b6eaec35e64cacec2a599308c33681b38264803 | 2021-12-02T19:24:47.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/om_roberta_mnli_lr5e6_ep_3.model | 2 | null | transformers | 24,417 | Entry not found |
luffycodes/om_roberta_mnli_lr5e6_ep_5.model | 1fdb9b28c8fe3390f6b1a7500e1a76207ae25c70 | 2021-12-02T13:49:14.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/om_roberta_mnli_lr5e6_ep_5.model | 2 | null | transformers | 24,418 | Entry not found |
luffycodes/om_roberta_mnli_lr5e6_nli_bb_lambda_dot5_ep_3.model | cb181930415a4e8e2ac47de00856179fafca4284 | 2021-12-03T09:01:19.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/om_roberta_mnli_lr5e6_nli_bb_lambda_dot5_ep_3.model | 2 | null | transformers | 24,419 | Entry not found |
luke-thorburn/suggest-conclusion-bias-only | 91dc46d7177e14830d0820640b1866dd49a37dad | 2022-07-12T10:08:32.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
] | text-generation | false | luke-thorburn | null | luke-thorburn/suggest-conclusion-bias-only | 2 | null | transformers | 24,420 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate the conclusion of an argument
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where some parameters (only the bias parameters, not weights) have been finetuned on the task of generating the conclusion of an argument given its premises. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
Consider the facts:
* [premise 1]
* [premise 2]
...
* [premise n]
We must conclude that: [generated conclusion]
```
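The card stops at the template; a hypothetical generation sketch following it (the premises are invented for illustration) could look like:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="luke-thorburn/suggest-conclusion-bias-only")

prompt = (
    "Consider the facts:\n"
    "* All men are mortal.\n"
    "* Socrates is a man.\n"
    "We must conclude that:"
)
print(generator(prompt, max_new_tokens=30, do_sample=True)[0]["generated_text"])
```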
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-conclusion-full-finetune | ba3f6f388c4021c7daa67f328abe565784c30b63 | 2022-07-12T10:02:48.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
] | text-generation | false | luke-thorburn | null | luke-thorburn/suggest-conclusion-full-finetune | 2 | null | transformers | 24,421 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate the conclusion of an argument
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where all parameters (both weights and biases) have been finetuned on the task of generating the conclusion of an argument given its premises. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
Consider the facts:
* [premise 1]
* [premise 2]
...
* [premise n]
We must conclude that: [generated conclusion]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-conclusion-soft | 476cf30dee970d883ea9946bccb51b77110d9b13 | 2022-07-12T09:43:47.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
] | text-generation | false | luke-thorburn | null | luke-thorburn/suggest-conclusion-soft | 2 | null | transformers | 24,422 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate the conclusion of an argument
This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating the conclusion of an argument given its premises. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
[prepended soft prompt]- [premise 1]
- [premise 2]
...
- [premise n]
Conclusion: [generated conclusion]
```
# Dataset
The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-intermediary-claims-bias-only | a6c9bcf4f5d7ccb263399b8d2725470b3205540b | 2022-07-12T10:06:29.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
] | text-generation | false | luke-thorburn | null | luke-thorburn/suggest-intermediary-claims-bias-only | 2 | null | transformers | 24,423 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate a chain of reasoning from one claim to another
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where some parameters (only the bias parameters, not weights) have been finetuned on the task of generating a sequence of claims (a 'chain of reasoning') that joins one claim to another. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
Input: [start claim] -> [end claim]
Output: [start claim] -> [generated intermediate claim 1] -> ... -> [generated intermediate claim n] -> [end claim]
```
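As with the other models in this family, a hypothetical sketch of querying the model with this template (the claims are invented for illustration) might be:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="luke-thorburn/suggest-intermediary-claims-bias-only")

prompt = (
    "Input: Cities should build more cycling infrastructure. -> Urban air quality will improve.\n"
    "Output:"
)
print(generator(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"])
```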
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-intermediary-claims-full-finetune | 6f0a922301bc8db61b90bfdf91ae723af5e76763 | 2022-07-12T09:56:47.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
] | text-generation | false | luke-thorburn | null | luke-thorburn/suggest-intermediary-claims-full-finetune | 2 | null | transformers | 24,424 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate a chain of reasoning from one claim to another
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where all parameters (both weights and biases) have been finetuned on the task of generating a sequence of claims (a 'chain of reasoning') that joins one claim to another. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
Input: [start claim] -> [end claim]
Output: [start claim] -> [generated intermediate claim 1] -> ... -> [generated intermediate claim n] -> [end claim]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-intermediary-claims-soft | b6a15c49640fe146cea10029e49302b6c201e6a5 | 2022-07-12T09:48:47.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
] | text-generation | false | luke-thorburn | null | luke-thorburn/suggest-intermediary-claims-soft | 2 | null | transformers | 24,425 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate a chain of reasoning from one claim to another
This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating a sequence of claims (a 'chain of reasoning') that joins one claim to another. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
[prepended soft prompt][start claim] -> [end claim]
Answer: [start claim] -> [generated intermediate claim 1] -> ... -> [generated intermediate claim n] -> [end claim]
```
# Dataset
The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-objections-bias-only | eada9d7eace7ccfc9aebcb2467bb78d6e9ee6411 | 2022-07-12T10:08:02.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
] | text-generation | false | luke-thorburn | null | luke-thorburn/suggest-objections-bias-only | 2 | null | transformers | 24,426 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate objections to a claim
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where some parameters (only the bias parameters, not weights) have been finetuned on the task of generating the objections to a claim, optionally given some example objections to that claim. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
List objections to the claim that: [original claim]
Objections:
* [objection 1]
* [objection 2]
...
* [objection n]
* [generated objection]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-objections-full-finetune | 59aa3936b735332714fa88cd3b32814d3b2ee60f | 2022-07-12T09:54:28.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
] | text-generation | false | luke-thorburn | null | luke-thorburn/suggest-objections-full-finetune | 2 | null | transformers | 24,427 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate objections to a claim
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where all parameters (both weights and biases) have been finetuned on the task of generating the objections to a claim, optionally given some example objections to that claim. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
List objections to the claim that: [original claim]
Objections:
* [objection 1]
* [objection 2]
...
* [objection n]
* [generated objection]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-reasons-bias-only | 9f6c0dce33dafc42ae382b91565a1c1ce815d593 | 2022-07-12T10:07:19.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
] | text-generation | false | luke-thorburn | null | luke-thorburn/suggest-reasons-bias-only | 2 | null | transformers | 24,428 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate reasons that support a claim
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where some parameters (only the bias parameters, not weights) have been finetuned on the task of generating reasons that support a claim, optionally given some example reasons. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
List reasons why: [original claim]
Reasons:
* [reason 1]
* [reason 2]
...
* [reason n]
* [generated reason]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-reasons-full-finetune | 70dd3ee3c03d2ff9317ff618a7cf6ebf5c23022f | 2022-07-12T10:04:57.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
] | text-generation | false | luke-thorburn | null | luke-thorburn/suggest-reasons-full-finetune | 2 | null | transformers | 24,429 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate reasons that support a claim
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where all parameters (both weights and biases) have been finetuned on the task of generating reasons that support a claim, optionally given some example reasons. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
List reasons why: [original claim]
Reasons:
* [reason 1]
* [reason 2]
...
* [reason n]
* [generated reason]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-reasons-soft | 479c9ddd2cdb481e53a627e76febb1f1ed89c5d1 | 2022-07-12T09:45:30.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
] | text-generation | false | luke-thorburn | null | luke-thorburn/suggest-reasons-soft | 2 | null | transformers | 24,430 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate reasons that support a claim
This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating reasons that support a claim, optionally given some example reasons. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
[prepended soft prompt][original claim]
Pros:
- [reason 1]
- [reason 2]
...
- [reason n]
- [generated reason]
```
# Dataset
The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
m3hrdadfi/hubert-large-greek-speech-emotion-recognition | a96a7d73766ccb2bd8790347afb8dc5af5da3ad8 | 2021-06-17T16:06:03.000Z | [
"pytorch",
"hubert",
"el",
"dataset:aesdd",
"transformers",
"audio",
"speech",
"speech-emotion-recognition",
"license:apache-2.0"
] | null | false | m3hrdadfi | null | m3hrdadfi/hubert-large-greek-speech-emotion-recognition | 2 | null | transformers | 24,431 | ---
language: el
datasets:
- aesdd
tags:
- audio
- speech
- speech-emotion-recognition
license: apache-2.0
---
# Emotion Recognition in Greek (el) Speech using HuBERT
## How to use
### Requirements
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```
```bash
!git clone https://github.com/m3hrdadfi/soxan.git .
```
### Prediction
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
from src.models import Wav2Vec2ForSpeechClassification, HubertForSpeechClassification
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/hubert-large-greek-speech-emotion-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
model = HubertForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```
```python
def speech_file_to_array_fn(path, sampling_rate):
    speech_array, _sampling_rate = torchaudio.load(path)
    resampler = torchaudio.transforms.Resample(_sampling_rate, sampling_rate)  # resample to the model's rate
    speech = resampler(speech_array).squeeze().numpy()
    return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
    outputs = [{"Emotion": config.id2label[i], "Score": f"{score * 100:.1f}%"} for i, score in enumerate(scores)]
return outputs
```
```python
path = "/path/to/disgust.wav"
outputs = predict(path, sampling_rate)
```
```bash
[
{'Emotion': 'anger', 'Score': '0.0%'},
{'Emotion': 'disgust', 'Score': '99.2%'},
{'Emotion': 'fear', 'Score': '0.1%'},
{'Emotion': 'happiness', 'Score': '0.3%'},
{'Emotion': 'sadness', 'Score': '0.5%'}
]
```
## Evaluation
The following table summarizes the scores obtained by the model overall and for each class.
| Emotions | precision | recall | f1-score | accuracy |
|:---------:|:---------:|:------:|:--------:|:--------:|
| anger | 0.96 | 0.96 | 0.96 | |
| disgust | 1.00 | 0.96 | 0.98 | |
| fear | 1.00 | 0.83 | 0.91 | |
| happiness | 1.00 | 0.96 | 0.98 | |
| sadness | 0.81 | 1.00 | 0.89 | |
| Overall | | | | 0.94 |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues). |
m3hrdadfi/wav2vec2-large-xlsr-lithuanian | d5b27b07dceb75975ccb840370181ff02edc4c90 | 2021-11-04T15:22:08.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | m3hrdadfi | null | m3hrdadfi/wav2vec2-large-xlsr-lithuanian | 2 | null | transformers | 24,432 | ---
language: lt
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- example_title: Common Voice sample 11
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-lithuanian/resolve/main/sample11.flac
- example_title: Common Voice sample 74
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-lithuanian/resolve/main/sample74.flac
model-index:
- name: XLSR Wav2Vec2 Lithuanian by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lt
type: common_voice
args: lt
metrics:
- name: Test WER
type: wer
value: 34.66
---
# Wav2Vec2-Large-XLSR-53-Lithuanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Lithuanian using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
```
**Normalizer**
```bash
!wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-lithuanian/raw/main/normalizer.py
```
**Prediction**
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import numpy as np
import re
import string
import IPython.display as ipd
from normalizer import normalizer
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-lithuanian").to(device)
dataset = load_dataset("common_voice", "lt", split="test[:1%]")
dataset = dataset.map(
normalizer,
fn_kwargs={"remove_extra_space": True},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
max_items = np.random.randint(0, len(result), 20).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
**Output:**
```text
reference: jos tikslas buvo rasti kelią į ramųjį vandenyną šiaurės amerikoje
predicted: jos tikstas buvo rasikelia į ramų į vandenyna šiaurės amerikoje
---
reference: pietrytinėje dalyje likusių katalikų kapinių teritorija po antrojo pasaulinio karo dar padidėjo
predicted: pietrytinė daljelikusių gatalikų kapinių teritoriją pontro pasaulnio karo dar padidėjo
---
reference: koplyčioje pakabintas aušros vartų marijos paveikslas
predicted: koplyčioje pakagintas aušos fortų marijos paveikslas
---
reference: yra politinių debatų vedėjas
predicted: yra politinių debatų vedėjas
---
reference: žmogui taip pat gali būti mirtinai pavojingi
predicted: žmogui taip pat gali būti mirtinai pavojingi
---
reference: tuo pačiu metu kijeve nuverstas netekęs vokietijos paramos skoropadskis
predicted: tuo pačiu metu kiei venų verstas netekės vokietijos paramos kropadskis
---
reference: visos dvylika komandų tarpusavyje sužaidžia po dvi rungtynes
predicted: visos dvylika komandų tarpuso vysų žaidžia po dvi rungtynės
---
reference: kaukazo regioną sudaro kaukazo kalnai ir gretimos žemumos
predicted: kau kazo regioną sudaro kaukazo kalnai ir gretimos žemumus
---
reference: tarptautinių ir rusiškų šaškių kandidatas į sporto meistrus
predicted: tarptautinio ir rusiškos šaškių kandidatus į sporto meistrus
---
reference: prasideda putorano plynaukštės pietiniame pakraštyje
predicted: prasideda futorano prynaukštės pietiniame pakraštyje
---
reference: miestas skirstomas į senamiestį ir naujamiestį
predicted: miestas skirstomas į senamėsti ir naujamiestė
---
reference: tais pačiais metais pelnė bronzą pasaulio taurės kolumbijos etape komandinio sprinto rungtyje
predicted: tais pačiais metais pelnį mronsa pasaulio taurės kolumbijos etape komandinio sprento rungtyje
---
reference: prasideda putorano plynaukštės pietiniame pakraštyje
predicted: prasideda futorano prynaukštės pietiniame pakraštyje
---
reference: moterų tarptautinės meistrės vardas yra viena pakopa žemesnis už moterų tarptautinės korespondencinių šachmatų didmeistrės
predicted: moterų tarptautinės meistrės vardas yra gana pakopo žymesnis už moterų tarptautinės kūrespondencinių šachmatų didmesčias
---
reference: teritoriją dengia tropinės džiunglės
predicted: teritorija dengia tropinės žiunglės
---
reference: pastaroji dažnai pereina į nimcovičiaus gynybą arba bogoliubovo gynybą
predicted: pastaruoji dažnai pereina nimcovičiaus gynyba arba bogalių buvo gymyba
---
reference: už tai buvo suimtas ir tris mėnesius sėdėjo butyrkų kalėjime
predicted: užtai buvo sujumtas ir tris mėne susiedėjo butirkų kalėjime
---
reference: tai didžiausias pagal gyventojų skaičių regionas
predicted: tai didžiausias pagal gyventojų skaičių redionus
---
reference: vilkyškių miške taip pat auga raganų eglė
predicted: vilkiškimiškė taip pat auga ragano eglė
---
reference: kitas gavo skaraitiškės dvarą su palivarkais
predicted: kitas gavos karaitiškės dvarą spolivarkais
---
```
## Evaluation
The model can be evaluated as follows on the test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import numpy as np
import re
import string
from normalizer import normalizer
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-lithuanian").to(device)
dataset = load_dataset("common_voice", "lt", split="test")
dataset = dataset.map(
normalizer,
fn_kwargs={"remove_extra_space": True},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Test Result**:
- WER: 34.66%
## Training & Report
The Common Voice `train` and `validation` splits were used for training.
You can see the training states [here](https://wandb.ai/m3hrdadfi/wav2vec2_large_xlsr_lt/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Lithuanian--Vmlldzo1OTM1MTU?accessToken=kdkpara4hcmjvrlpbfsnu4s8cdk3a0xeyrb84ycpr4k701n13hzr9q7s60b00swx)
The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Lithuanian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
## Questions?
Post a Github issue on the [Wav2Vec](https://github.com/m3hrdadfi/wav2vec) repo. |
macedonizer/al-gpt2 | e72656be7d9c10111a3613ec28754d53da34d38f | 2021-09-14T16:17:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"al",
"dataset:wiki-al",
"transformers",
"license:apache-2.0"
] | text-generation | false | macedonizer | null | macedonizer/al-gpt2 | 2 | 1 | transformers | 24,433 | ---
language:
- al
thumbnail: https://huggingface.co/macedonizer/al-roberta-base/lets-talk-about-nlp-al.jpg
license: apache-2.0
datasets:
- wiki-al
---
# al-gpt2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on the Albanian language using a causal language modeling (CLM) objective. The approach was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model description
al-gpt2 is a transformers model pretrained on a very large corpus of Albanian data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of a word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.
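To make this concrete, here is a minimal sketch of computing that shifted-label loss with the `transformers` API (assumptions: the checkpoint loads with the modern `AutoModelForCausalLM` class, and the Albanian example sentence is purely illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('macedonizer/al-gpt2')
model = AutoModelForCausalLM.from_pretrained('macedonizer/al-gpt2')

enc = tokenizer('Tirana është kryeqyteti i Shqipërisë', return_tensors='pt')
# Passing labels=input_ids makes the model shift the labels one position
# internally, so the returned loss is the average next-token cross-entropy.
outputs = model(**enc, labels=enc['input_ids'])
print(outputs.loss.item())
```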
This way, the model learns an inner representation of the Albanian language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating texts from a
prompt.
### How to use
Here is how to use this model to generate text in PyTorch:
```python
import random
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/al-gpt2')
model = AutoModelWithLMHead.from_pretrained('macedonizer/al-gpt2')

input_text = 'Tirana'

if len(input_text) == 0:
    # No prompt: sample freely from a random BOS token
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    # Condition generation on the encoded prompt
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

print(decoded_output)
``` |
machinelord/bert_esa_ep4 | 99e437447360bff1b231efa94dd19d9fc45a5103 | 2021-05-19T22:30:33.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | machinelord | null | machinelord/bert_esa_ep4 | 2 | null | transformers | 24,434 | Entry not found |
madlag/bert-base-uncased-squad1.1-pruned-x3.2-v2 | d57ef92eb1f4b0d74cbdc086e1babf9c9adc6a4b | 2021-05-19T22:34:32.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | madlag | null | madlag/bert-base-uncased-squad1.1-pruned-x3.2-v2 | 2 | null | transformers | 24,435 | Entry not found |
madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1 | 86c732c0f045a254f8a7e9bd59bfe96d1408cb81 | 2021-06-16T15:03:46.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"en",
"dataset:squad",
"transformers",
"license:mit",
"autotrain_compatible"
] | question-answering | false | madlag | null | madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1 | 2 | null | transformers | 24,436 | ---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---
## BERT-base uncased model fine-tuned on SQuAD v1
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contain 8.0%** of the original weights.
The model contains **28.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
With a simple resizing of the linear matrices it ran **1.16x as fast as bert-base-uncased** on the evaluation.
This is possible because the pruning method leads to structured matrices: to visualize them, hover below on the plot to see the non-zero/zero parts of each matrix.
<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1/raw/main/model_card/density_info.js" id="c60d09ec-81ff-4d6f-b616-c3ef09b2175d"></script></div>
In terms of accuracy, its **F1 is 88.11**, compared with 88.5 for bert-base-uncased, a **F1 drop of 0.39**.
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad).
This model is case-insensitive: it does not make a difference between english and English.
A side-effect of the block pruning is that some of the attention heads are completely removed: 22 heads were removed on a total of 144 (15.3%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.
<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1/raw/main/model_card/pruning_info.js" id="55528c8b-d5f5-46a5-a35a-dad93725f7e5"></script></div>
## Details of the SQuAD1.1 dataset
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K |
| SQuAD1.1 | eval | 11.1k |
### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```
### Results
**Pytorch model file size**: `398MB` (original BERT: `420MB`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **80.94** | **80.8** | **+0.14**|
| **F1** | **88.11** | **88.5** | **-0.39**|
## Example Usage
Install nn_pruning: it contains the optimization script, which simply packs the linear layers into smaller ones by removing empty rows/columns.
`pip install nn_pruning`
Then you can use the `transformers library` almost as usual: you just have to call `optimize_model` when the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model
qa_pipeline = pipeline(
"question-answering",
model="madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1",
tokenizer="madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1"
)
print("bert-base-uncased parameters: 152.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")
qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")
print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")
predictions = qa_pipeline({
'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
'question': "Who is Frederic Chopin?",
})
print("Predictions", predictions)
``` |
madlag/bert-large-uncased-squadv2 | e1829c93de89e47fe3cbdf50e8c9b813d5a2cefb | 2021-05-19T22:43:07.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible"
] | question-answering | false | madlag | null | madlag/bert-large-uncased-squadv2 | 2 | null | transformers | 24,437 | ## BERT-large fine-tuned on SQuAD v2
F1 on dev (from the [paper](https://arxiv.org/pdf/1810.04805v2.pdf)) is 81.9; we reach 81.58.
```
{'exact': 78.6321906847469,
'f1': 81.5816656803201,
'total': 11873,
'HasAns_exact': 73.73481781376518,
'HasAns_f1': 79.64222615088413,
'HasAns_total': 5928,
'NoAns_exact': 83.51555929352396,
'NoAns_f1': 83.51555929352396,
'NoAns_total': 5945,
'best_exact': 78.6321906847469,
'best_exact_thresh': 0.0,
'best_f1': 81.58166568032006,
'best_f1_thresh': 0.0,
'epoch': 1.59}
```
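A minimal usage sketch (the `handle_impossible_answer` flag is standard `transformers` pipeline behaviour for SQuAD v2-style models, not something specific to this checkpoint):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="madlag/bert-large-uncased-squadv2")
# SQuAD v2 includes unanswerable questions; this flag lets the pipeline
# return an empty answer when "no answer" scores higher than any span.
print(qa(
    question="Who designed the Eiffel Tower?",
    context="The Eiffel Tower is named after the engineer Gustave Eiffel, "
            "whose company designed and built the tower.",
    handle_impossible_answer=True,
))
```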
```
python run_qa.py \
--model_name_or_path bert-large-uncased \
--dataset_name squad_v2 \
--do_train \
--do_eval \
--save_steps 2500 \
--eval_steps 2500 \
--evaluation_strategy steps \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir bert-large-uncased-squadv2 \
--version_2_with_negative 1
``` |
maher13/English_ASR | de18fce77dde0c3a22e077d9db7714186086a7d1 | 2021-12-28T17:20:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | maher13 | null | maher13/English_ASR | 2 | null | transformers | 24,438 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: English_ASR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# English_ASR
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4971
- Wer: 0.3397
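A minimal transcription sketch (assumptions: the checkpoint ships processor files, the input is 16 kHz mono audio, and `sample.wav` is a placeholder path):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("maher13/English_ASR")
model = Wav2Vec2ForCTC.from_pretrained("maher13/English_ASR")

# Load a local file and resample to the 16 kHz mono audio wav2vec2 expects.
speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```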
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3432 | 4.0 | 500 | 1.1711 | 0.7767 |
| 0.5691 | 8.0 | 1000 | 0.4613 | 0.4357 |
| 0.2182 | 12.0 | 1500 | 0.4715 | 0.3853 |
| 0.1267 | 16.0 | 2000 | 0.4307 | 0.3607 |
| 0.0846 | 20.0 | 2500 | 0.4971 | 0.3537 |
| 0.0608 | 24.0 | 3000 | 0.4712 | 0.3419 |
| 0.0457 | 28.0 | 3500 | 0.4971 | 0.3397 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
malhajj/ArabGlossBERT | 517d7c42c6e9e3443ae5bf296f6fde6fde0bb79e | 2021-08-27T19:44:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | malhajj | null | malhajj/ArabGlossBERT | 2 | null | transformers | 24,439 | Entry not found |
malteos/aspect-acl-scibert-scivocab-uncased | d33d2b4d367da26dafe477dae01e13325144c7eb | 2021-11-22T10:09:21.000Z | [
"pytorch",
"bert",
"sci",
"en",
"dataset:acl-arc",
"arxiv:2010.06395",
"transformers",
"classification",
"similarity",
"license:mit"
] | null | false | malteos | null | malteos/aspect-acl-scibert-scivocab-uncased | 2 | null | transformers | 24,440 | ---
language:
- sci
- en
tags:
- classification
- similarity
license: mit
datasets:
- acl-arc
---
# Aspect-based Document Similarity for Research Papers
A `scibert-scivocab-uncased` model fine-tuned on the ACL Anthology corpus as in [Aspect-based Document Similarity for Research Papers](https://arxiv.org/abs/2010.06395).
<img src="https://raw.githubusercontent.com/malteos/aspect-document-similarity/master/docrel.png">
See GitHub for more details: https://github.com/malteos/aspect-document-similarity
## Demo
<a href="https://colab.research.google.com/github/malteos/aspect-document-similarity/blob/master/demo.ipynb"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Google Colab"></a>
You can try our trained models directly on Google Colab on all papers available on Semantic Scholar (via DOI, ArXiv ID, ACL ID, PubMed ID):
<a href="https://colab.research.google.com/github/malteos/aspect-document-similarity/blob/master/demo.ipynb"><img src="https://raw.githubusercontent.com/malteos/aspect-document-similarity/master/demo.gif" alt="Click here for demo"></a>
|
mamlong34/MiniLM-L6-snli_mnli_fever_anli_R1_R2_R3-nli | 5ff2e09d762ad49bf65776d3e5e6c4ee2ebee2b4 | 2021-10-05T03:26:48.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mamlong34 | null | mamlong34/MiniLM-L6-snli_mnli_fever_anli_R1_R2_R3-nli | 2 | null | transformers | 24,441 | Entry not found |
manishiitg/bart-recruit-qa | 8e4c68961c03b0f4b4fbfe487237b0b1aadc6e81 | 2020-11-01T14:16:30.000Z | [
"pytorch",
"bart",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | manishiitg | null | manishiitg/bart-recruit-qa | 2 | null | transformers | 24,442 | Entry not found |
manishiitg/longformer-recruit-qa-large | d2742b51535087d9a665330c8e1399b7d242b391 | 2020-10-30T05:17:51.000Z | [
"pytorch",
"longformer",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | manishiitg | null | manishiitg/longformer-recruit-qa-large | 2 | null | transformers | 24,443 | Entry not found |
mapama247/test123 | 9cd8ea41135b8f6b7dda7892264fdd7fb5fb1823 | 2022-01-04T11:21:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mapama247 | null | mapama247/test123 | 2 | null | transformers | 24,444 | Entry not found |
maple/distilbert-base-cased | d0807c3df21265f4af1349013a1719366533b6e5 | 2022-01-03T10:44:59.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | maple | null | maple/distilbert-base-cased | 2 | null | transformers | 24,445 | Entry not found |
marbogusz/bert-multi-cased-squad_sv | bb58517a3c37795cf9da9ae22f187a75769f362d | 2021-05-19T23:00:13.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | marbogusz | null | marbogusz/bert-multi-cased-squad_sv | 2 | null | transformers | 24,446 | Swedish bert multilingual model trained on a machine translated (MS neural translation) SQUAD 1.1 dataset
|
marciovbarbosa/t5-small-finetuned-de-to-en | 815896ab1203149922fb56ef3da79719d9703856 | 2021-12-04T00:56:09.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | marciovbarbosa | null | marciovbarbosa/t5-small-finetuned-de-to-en | 2 | null | transformers | 24,447 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.2166
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9417
- Bleu: 9.2166
- Gen Len: 17.3404
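A minimal translation sketch (the `translate German to English:` task prefix is an assumption based on the standard T5 WMT recipe):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("marciovbarbosa/t5-small-finetuned-de-to-en")
model = AutoModelForSeq2SeqLM.from_pretrained("marciovbarbosa/t5-small-finetuned-de-to-en")

# The task prefix below is assumed; check the training setup if output looks off.
text = "translate German to English: Guten Morgen, wie geht es dir?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```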
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 272 | 2.1660 | 3.8515 | 17.6289 |
| 2.6678 | 2.0 | 544 | 2.0656 | 6.4422 | 17.4842 |
| 2.6678 | 3.0 | 816 | 2.0203 | 7.4348 | 17.3741 |
| 2.4316 | 4.0 | 1088 | 1.9926 | 8.0914 | 17.3658 |
| 2.4316 | 5.0 | 1360 | 1.9739 | 8.6535 | 17.3461 |
| 2.3307 | 6.0 | 1632 | 1.9603 | 8.8757 | 17.3768 |
| 2.3307 | 7.0 | 1904 | 1.9509 | 9.0744 | 17.3511 |
| 2.2945 | 8.0 | 2176 | 1.9466 | 9.1111 | 17.3418 |
| 2.2945 | 9.0 | 2448 | 1.9427 | 9.1969 | 17.3351 |
| 2.2666 | 10.0 | 2720 | 1.9417 | 9.2166 | 17.3404 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
marcolatella/emotion_trained | 3ee99130ed0cb28a655c451a04431ef3018cc681 | 2021-12-10T23:23:20.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | marcolatella | null | marcolatella/emotion_trained | 2 | null | transformers | 24,448 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: emotion_trained
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7377785764567545
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9362
- F1: 0.7378
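A minimal inference sketch (assuming the checkpoint works with the standard text-classification pipeline; the class names mentioned in the comment come from tweet_eval's emotion split and may not be stored in the checkpoint):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="marcolatella/emotion_trained")
print(clf("I can't believe they cancelled the show, this is awful."))
# tweet_eval's emotion split has four classes (anger, joy, optimism, sadness);
# whether this checkpoint stores readable label names is an assumption, so the
# output may show generic ids such as LABEL_0 to LABEL_3.
```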
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.7468 | 0.6599 |
| No log | 2.0 | 408 | 0.6829 | 0.7369 |
| 0.5184 | 3.0 | 612 | 0.8089 | 0.7411 |
| 0.5184 | 4.0 | 816 | 0.9362 | 0.7378 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
marcolatella/emotion_trained_1234567 | 94db5480b2a8d99dd7f1fdc0a50b7b6309aff160 | 2021-12-11T21:27:53.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | marcolatella | null | marcolatella/emotion_trained_1234567 | 2 | null | transformers | 24,449 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: emotion_trained_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7328362995029661
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9045
- F1: 0.7328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.6480 | 0.7231 |
| No log | 2.0 | 408 | 0.6114 | 0.7403 |
| 0.5045 | 3.0 | 612 | 0.7593 | 0.7311 |
| 0.5045 | 4.0 | 816 | 0.9045 | 0.7328 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
marcolatella/emotion_trained_31415 | 7989f9a5c943898fbca95a048139ac2b02abaa6e | 2021-12-11T21:18:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | marcolatella | null | marcolatella/emotion_trained_31415 | 2 | null | transformers | 24,450 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: emotion_trained_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7213200335291519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9166
- F1: 0.7213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.6182 | 0.7137 |
| No log | 2.0 | 408 | 0.7472 | 0.6781 |
| 0.5084 | 3.0 | 612 | 0.8242 | 0.7236 |
| 0.5084 | 4.0 | 816 | 0.9166 | 0.7213 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
marcolatella/emotion_trained_42 | 3260dfc50741e61422d6ae9815bf8de911abe640 | 2021-12-11T21:09:32.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | marcolatella | null | marcolatella/emotion_trained_42 | 2 | null | transformers | 24,451 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: emotion_trained_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7319321237976675
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8988
- F1: 0.7319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.6131 | 0.6955 |
| No log | 2.0 | 408 | 0.5837 | 0.7270 |
| 0.5149 | 3.0 | 612 | 0.8925 | 0.7267 |
| 0.5149 | 4.0 | 816 | 0.8988 | 0.7319 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
marcolatella/hate_trained_1234567 | 4fb6ef83e9230fd89e52658a9ba17e5c4a4d72f3 | 2021-12-11T20:59:59.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | marcolatella | null | marcolatella/hate_trained_1234567 | 2 | null | transformers | 24,452 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: hate_trained_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7750768993843997
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate_trained_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7927
- F1: 0.7751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.7272339744854407e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4835 | 1.0 | 563 | 0.4882 | 0.7534 |
| 0.3236 | 2.0 | 1126 | 0.5286 | 0.7590 |
| 0.2191 | 3.0 | 1689 | 0.6103 | 0.7717 |
| 0.1408 | 4.0 | 2252 | 0.7927 | 0.7751 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
maroo93/kd_squad1.1 | d29ea5a38b0d1f331ab48a784cabdf6f95ef5dd0 | 2021-05-19T23:04:36.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | maroo93 | null | maroo93/kd_squad1.1 | 2 | null | transformers | 24,453 | Entry not found |
masakhane/m2m100_418M_fr_fon_rel_news | 854f2dd069da4d91b227aafab083b4be9a2e8283 | 2022-04-16T18:54:30.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"fr",
"fon",
"dataset:JW300 + [LAFAND](https://github.com/masakhane-io/lafand-mt)",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_fon_rel_news | 2 | null | transformers | 24,454 |
---
language:
- fr
- fon
datasets:
- JW300 + [LAFAND](https://github.com/masakhane-io/lafand-mt)
---
# m2m100_418M-fr-fon-mt
## Model description
**m2m100_418M-fr-fon-mt** is a **machine translation** model from French to Fon based on a fine-tuned facebook/m2m100_418M model. It establishes a **baseline** for automatically translating texts from French to Fon.
#### Limitations and bias
This model is limited by its training dataset and may not generalize well to use cases in other domains.
## Training data
Specifically, this model is a *m2m100_418M* model that was fine-tuned on JW300 Fon corpus and [LAFAND](https://github.com/masakhane-io/lafand-mt).
## Training procedure
This model was trained on an NVIDIA V100 GPU.
## Eval results on Test set (BLEU score)
Fine-tuning m2m100_418M achieves **4.96 BLEU** on [LAFAND test set](https://github.com/masakhane-io/lafand-mt)
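A minimal usage sketch, with a strong caveat: Fon is not among M2M100's original language codes, so the target-code lookup below is an assumption; inspect the checkpoint's tokenizer for the code actually used during fine-tuning.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("masakhane/m2m100_418M_fr_fon_rel_news")
tokenizer = M2M100Tokenizer.from_pretrained("masakhane/m2m100_418M_fr_fon_rel_news")

tokenizer.src_lang = "fr"
inputs = tokenizer("Bonjour, comment allez-vous ?", return_tensors="pt")
# "fon" is a placeholder target code; fall back to "fr" only so the sketch
# runs -- check tokenizer.lang_code_to_id on this checkpoint for the real one.
target_lang = "fon" if "fon" in tokenizer.lang_code_to_id else "fr"
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id[target_lang])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```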
### BibTeX entry and citation info
By David Adelani
```
```
|
matheusntg/character-bert-pt-normal | 7f9a15133182133f94151c144d8d673622a08ef8 | 2021-07-24T22:43:34.000Z | [
"pytorch",
"character_bert",
"transformers"
] | null | false | matheusntg | null | matheusntg/character-bert-pt-normal | 2 | null | transformers | 24,455 | Entry not found |
mattchurgin/distilbert-mrpc | 3c7d310ec455d72477e371ffccf6a2fb89d5c774 | 2021-12-31T22:53:22.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | mattchurgin | null | mattchurgin/distilbert-mrpc | 2 | null | transformers | 24,456 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8480392156862745
- name: F1
type: f1
value: 0.8934707903780068
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-mrpc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6783
- Accuracy: 0.8480
- F1: 0.8935
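A minimal usage sketch (assuming the checkpoint works with the standard text-classification pipeline; MRPC is a sentence-pair task, so both sentences go in one input):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="mattchurgin/distilbert-mrpc")
# The checkpoint may report generic LABEL_0/LABEL_1 ids rather than
# not-paraphrase/paraphrase names -- that mapping is an assumption.
print(clf({"text": "The company reported strong earnings.",
           "text_pair": "Profits at the firm were high this quarter."}))
```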
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5916 | 0.22 | 100 | 0.5676 | 0.7157 | 0.8034 |
| 0.5229 | 0.44 | 200 | 0.4534 | 0.7770 | 0.8212 |
| 0.5055 | 0.65 | 300 | 0.4037 | 0.8137 | 0.8762 |
| 0.4597 | 0.87 | 400 | 0.3706 | 0.8407 | 0.8893 |
| 0.4 | 1.09 | 500 | 0.4590 | 0.8113 | 0.8566 |
| 0.3498 | 1.31 | 600 | 0.4196 | 0.8554 | 0.8974 |
| 0.2916 | 1.53 | 700 | 0.4606 | 0.8554 | 0.8933 |
| 0.3309 | 1.74 | 800 | 0.5162 | 0.8578 | 0.9027 |
| 0.3788 | 1.96 | 900 | 0.3911 | 0.8529 | 0.8980 |
| 0.2059 | 2.18 | 1000 | 0.5842 | 0.8554 | 0.8995 |
| 0.1595 | 2.4 | 1100 | 0.5701 | 0.8578 | 0.8975 |
| 0.1205 | 2.61 | 1200 | 0.6905 | 0.8407 | 0.8889 |
| 0.174 | 2.83 | 1300 | 0.6783 | 0.8480 | 0.8935 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mattchurgin/distilbert-sst2 | 76f660b04759db41e9b9e89bd79aa3377cbc277e | 2021-12-31T23:08:41.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | mattchurgin | null | mattchurgin/distilbert-sst2 | 2 | null | transformers | 24,457 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilbert-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4182
- eval_accuracy: 0.8911
- eval_runtime: 1.8021
- eval_samples_per_second: 483.882
- eval_steps_per_second: 60.485
- epoch: 0.8
- step: 6700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
maximedb/test | fea14bae5cc0aee7333848ac760286cbaa58fd5b | 2021-10-11T15:35:56.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | maximedb | null | maximedb/test | 2 | null | transformers | 24,458 | Entry not found |
mazancourt/politics-sentence-classifier | 4dfe68da01fa7248891a79e02bddca4ea3989394 | 2021-10-20T16:14:02.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"dataset:mazancourt/autonlp-data-politics-sentence-classifier",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | mazancourt | null | mazancourt/politics-sentence-classifier | 2 | 2 | transformers | 24,459 | ---
tags: autonlp
language: fr
widget:
- text: "Il y a dans ce pays une fracture"
datasets:
- mazancourt/autonlp-data-politics-sentence-classifier
co2_eq_emissions: 1.06099358268878
---
# Prediction of sentence "nature" in a French political sentence
This model predicts the nature of a sentence in French political discourse.
The predictions fall in three categories:
- `problem`: the sentence describes a problem (usually to be tackled by the speaker), for example _il y a dans ce pays une fracture_ (J. Chirac)
- `solution`: the sentences describes a solution (typically part of a political programme), for example: _J’ai supprimé les droits de succession parce que je crois au travail et parce que je crois à la famille._ (N. Sarkozy)
- `other`: the sentence does not belong to any of these categories, for example: _vive la République, vive la France_
This model was trained using AutoNLP based on sentences extracted from a mix of political tweets and speeches.
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 23105051
- CO2 Emissions (in grams): 1.06099358268878
## Validation Metrics
- Loss: 0.6050735712051392
- Accuracy: 0.8097826086956522
- Macro F1: 0.7713543865034599
- Micro F1: 0.8097826086956522
- Weighted F1: 0.8065488494385247
- Macro Precision: 0.7861074705111403
- Micro Precision: 0.8097826086956522
- Weighted Precision: 0.806470454156932
- Macro Recall: 0.7599656456873758
- Micro Recall: 0.8097826086956522
- Weighted Recall: 0.8097826086956522
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Il y a dans ce pays une fracture"}' https://api-inference.huggingface.co/models/mazancourt/politics-sentence-classifier
```
Or Python API:
```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("mazancourt/autonlp-politics-sentence-classifier-23105051", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mazancourt/politics-sentence-classifier", use_auth_token=True)

inputs = tokenizer("Il y a dans ce pays une fracture", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]
# Category can be "problem", "solution" or "other"
category = model.config.id2label[int(probs.argmax())]
score = float(probs.max())
``` |
mbeukman/xlm-roberta-base-finetuned-ner-amharic | 0826f5a4901cd6a9036facd50dc47b3ea08594ff | 2022-02-22T11:32:33.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"am",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
] | token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-ner-amharic | 2 | null | transformers | 24,460 | ---
language:
- am
tags:
- NER
- token-classification
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
---
# xlm-roberta-base-finetuned-ner-amharic
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Amharic part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
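As a rough illustration, those hyperparameters correspond to a `TrainingArguments` setup along the following lines (a sketch only, not the authors' actual training script; the output directory and all other defaults are assumptions):

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters (50 epochs, batch size 32, lr 5e-5;
# the maximum sequence length of 200 is applied at tokenization time).
args = TrainingArguments(
    output_dir="xlmr-ner-amh",    # placeholder
    num_train_epochs=50,
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    seed=1,                       # the authors repeated training with 5 seeds
)
```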
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, whose data distribution is similar to the training set's, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Staring point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-amharic) (This model) | [base](https://huggingface.co/xlm-roberta-base) | amh | 72.63 | 70.49 | 74.91 | 76.00 | 75.00 | 52.00 | 78.00 |
| [xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic) | [amh](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic) | amh | 79.55 | 76.71 | 82.62 | 70.00 | 84.00 | 62.00 | 91.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | amh | 70.34 | 69.72 | 70.97 | 72.00 | 75.00 | 51.00 | 73.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-amharic'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-ner-naija | a2ab6cbcd564f94647e4918c0a8761bd22945107 | 2021-11-25T09:04:38.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"pcm",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
] | token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-ner-naija | 2 | null | transformers | 24,461 | ---
language:
- pcm
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Mixed Martial Arts joinbodi , Ultimate Fighting Championship , UFC don decide say dem go enta back di octagon on Saturday , 9 May , for Jacksonville , Florida ."
---
# xlm-roberta-base-finetuned-ner-naija
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Nigerian Pidgin part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, whose data distribution is similar to the training set's, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Staring point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-naija) (This model) | [base](https://huggingface.co/xlm-roberta-base) | pcm | 88.89 | 88.13 | 89.66 | 92.00 | 87.00 | 82.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-naija) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | pcm | 88.06 | 87.04 | 89.12 | 90.00 | 88.00 | 81.00 | 92.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-naija) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | pcm | 89.12 | 87.84 | 90.42 | 90.00 | 89.00 | 82.00 | 94.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-naija'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi , Ultimate Fighting Championship , UFC don decide say dem go enta back di octagon on Saturday , 9 May , for Jacksonville , Florida ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-yoruba | 16f8b473a5959d3f78935b6da1fc5f274a8c2238 | 2021-11-25T09:05:18.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"yo",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
] | token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-yoruba | 2 | null | transformers | 24,462 | ---
language:
- yo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Kò sí ẹ̀rí tí ó fi ẹsẹ̀ rinlẹ̀ ."
---
# xlm-roberta-base-finetuned-yoruba-finetuned-ner-yoruba
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-yoruba](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Yoruba part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on a NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into, e.g., interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, that limitation might need to be taken into account and addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-yoruba](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-yoruba) (This model) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | yor | 83.68 | 79.92 | 87.82 | 78.00 | 86.00 | 74.00 | 92.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-yoruba](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-yoruba) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | yor | 80.29 | 78.34 | 82.35 | 77.00 | 82.00 | 73.00 | 86.00 |
| [xlm-roberta-base-finetuned-ner-yoruba](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-yoruba) | [base](https://huggingface.co/xlm-roberta-base) | yor | 78.22 | 77.21 | 79.26 | 77.00 | 80.00 | 71.00 | 82.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-yoruba'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Kò sí ẹ̀rí tí ó fi ẹsẹ̀ rinlẹ̀ ."
ner_results = nlp(example)
print(ner_results)
```
|
mboth/distil-eng | 9540b388703f3f3a6de7585c6c122304d0f3a253 | 2021-06-25T10:18:12.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mboth | null | mboth/distil-eng | 2 | 1 | transformers | 24,463 | Entry not found |
megagonlabs/optimus-amzn | af204b42758eaefefa1b7a1992eab54b2c1f5cd2 | 2021-09-11T00:16:57.000Z | [
"pytorch",
"en",
"transformers",
"summarization",
"license:bsd-3-clause"
] | summarization | false | megagonlabs | null | megagonlabs/optimus-amzn | 2 | null | transformers | 24,464 | ---
language: en
tags:
- summarization
inference: false
license: bsd-3-clause
---
## Optimus model
See the original GitHub repo for more details [here](https://github.com/megagonlabs/coop)
|
merve/deberta-small-mrpc | f65f5e8eb93f89dfcd9c41176826e96121f1cae5 | 2021-11-24T15:06:28.000Z | [
"pytorch",
"tensorboard",
"transformers",
"text-classification"
] | text-classification | false | merve | null | merve/deberta-small-mrpc | 2 | null | transformers | 24,465 | ---
tags:
- transformers
- text-classification
pipeline_tag: text-classification
---
Title |
mewmew/DialoGPT-small-rick | 1fe74027aa9008e5e964a6fac2c95c8a22894929 | 2021-09-25T23:04:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mewmew | null | mewmew/DialoGPT-small-rick | 2 | null | transformers | 24,466 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
mflorinsky/distilbert-base-uncased-finetuned-cola | 7c0d45b7dcd8f6f30e5ba6d2042a9076a05cb876 | 2021-11-28T18:48:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | mflorinsky | null | mflorinsky/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 24,467 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5225783911538823
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8753
- Matthews Correlation: 0.5226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
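For reference, a minimal sketch of how the settings above map onto `TrainingArguments`; the original training script is not part of this card, so the output directory (and anything not listed) is an assumption:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above;
# the Adam betas/epsilon shown are the optimizer defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```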
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5307 | 1.0 | 535 | 0.5040 | 0.4210 |
| 0.358 | 2.0 | 1070 | 0.5018 | 0.5024 |
| 0.2402 | 3.0 | 1605 | 0.6434 | 0.4946 |
| 0.1825 | 4.0 | 2140 | 0.7442 | 0.5184 |
| 0.1304 | 5.0 | 2675 | 0.8753 | 0.5226 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
miaomiaomiao/macbert_ngram_miao | 51ca44675349bca2134cb73153efe9243665e395 | 2021-05-19T23:22:00.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | miaomiaomiao | null | miaomiaomiao/macbert_ngram_miao | 2 | null | transformers | 24,468 | for contest
|
michaelhsieh42/distilbert-base-uncased-finetuned-cola | 05f970343cb51433d599e3d1b96cc5c4f1f1bcd0 | 2022-01-21T23:23:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | michaelhsieh42 | null | michaelhsieh42/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 24,469 | Entry not found |
microsoft/unispeech-sat-base-sv | 35751ea97835323f8eb3414e7339b7f52b202373 | 2021-12-17T18:11:05.000Z | [
"pytorch",
"unispeech-sat",
"audio-xvector",
"en",
"dataset:librispeech_asr",
"arxiv:2110.05752",
"transformers",
"speech"
] | null | false | microsoft | null | microsoft/unispeech-sat-base-sv | 2 | null | transformers | 24,470 | ---
language:
- en
datasets:
- librispeech_asr
tags:
- speech
---
# UniSpeech-SAT-Base for Speaker Verification
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
The model was pre-trained on:
- 960 hours of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks.*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Fine-tuning details
The model is fine-tuned on the [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) using an X-Vector head with an Additive Margin Softmax loss
[X-Vectors: Robust DNN Embeddings for Speaker Recognition](https://www.danielpovey.com/files/2018_icassp_xvectors.pdf)
# Usage
## Speaker Verification
```python
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForXVector
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/unispeech-sat-base-sv')
model = UniSpeechSatForXVector.from_pretrained('microsoft/unispeech-sat-base-sv')
# audio files are decoded on the fly
inputs = feature_extractor([d["array"] for d in dataset[:2]["audio"]], sampling_rate=16000, padding=True, return_tensors="pt")
embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.86 # the optimal threshold is dataset-dependent
if similarity < threshold:
print("Speakers are not the same!")
```
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
midas/gupshup_e2e_mbart | ce264ea41088e4d822ff1a66dd9ac5f8251a6112 | 2021-11-14T02:06:19.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"arxiv:1910.04073",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | midas | null | midas/gupshup_e2e_mbart | 2 | null | transformers | 24,471 | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations use the `.source` file extension (e.g. train.source), whereas summaries use the `.target` file extension (e.g. train.target). The ".source" file needs to be provided to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Huggingface model hub. Users can either download these weights locally and provide that path to the `model_name` argument in the scripts, or pass the provided alias to `model_name` directly; in the latter case the scripts will download the weights automatically.
Model names follow the pattern "gupshup_TASK_MODEL", where "TASK" can be h2e or e2e and "MODEL" can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. run_eval script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate metrics.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files via the `input_path` and `reference_path` arguments. Alternatively, you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mBART model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
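If you prefer to run a single conversation through a model directly with `transformers` rather than via `run_eval.py`, a minimal sketch follows; the decoding settings here are assumptions, and the settings actually used in the paper live in the repo's `run_eval.py`:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "midas/gupshup_e2e_mbart"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# One conversation, formatted like a single line from a .source file (illustrative text).
dialogue = "Amanda: I baked cookies. Do you want some? Jerry: Sure! Amanda: I'll bring you some tomorrow."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=100)  # assumed decoding settings
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```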
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
midas/gupshup_h2e_bart | 93c2f3a3f442b35535190291be2b46bc5d53c13d | 2021-11-14T02:09:56.000Z | [
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.04073",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | midas | null | midas/gupshup_h2e_bart | 2 | null | transformers | 24,472 | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations use the `.source` file extension (e.g. train.source), whereas summaries use the `.target` file extension (e.g. train.target). The ".source" file needs to be provided to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Huggingface model hub. Users can either download these weights locally and provide that path to the `model_name` argument in the scripts, or pass the provided alias to `model_name` directly; in the latter case the scripts will download the weights automatically.
Model names follow the pattern "gupshup_TASK_MODEL", where "TASK" can be h2e or e2e and "MODEL" can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. run_eval script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate metrics.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files via the `input_path` and `reference_path` arguments. Alternatively, you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mBART model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
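The same direct-`transformers` pattern sketched in the mBART card above works here as well; only the alias changes (again, decoding settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "midas/gupshup_h2e_bart"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = "..."  # a Hinglish conversation, i.e. one line from an h2e .source file
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=100)  # assumed decoding settings
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```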
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
midas/gupshup_h2e_pegasus | b378deee3cfa0c487fe58d1818621608d9845ba6 | 2021-11-14T02:09:12.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"arxiv:1910.04073",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | midas | null | midas/gupshup_h2e_pegasus | 2 | null | transformers | 24,473 | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations use the `.source` file extension (e.g. train.source), whereas summaries use the `.target` file extension (e.g. train.target). The ".source" file needs to be provided to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Huggingface model hub. Users can either download these weights locally and provide that path to the `model_name` argument in the scripts, or pass the provided alias to `model_name` directly; in the latter case the scripts will download the weights automatically.
Model names follow the pattern "gupshup_TASK_MODEL", where "TASK" can be h2e or e2e and "MODEL" can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. run_eval script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate metrics.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files via the `input_path` and `reference_path` arguments. Alternatively, you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mBART model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
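Likewise for the PEGASUS checkpoint, the direct-`transformers` sketch only needs the alias swapped (decoding settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "midas/gupshup_h2e_pegasus"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = "..."  # a Hinglish conversation, i.e. one line from an h2e .source file
summary_ids = model.generate(**tokenizer(dialogue, return_tensors="pt", truncation=True),
                             num_beams=4, max_length=100)  # assumed decoding settings
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```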
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
mideind/IceBERT-igc | 4b43558290678ec915caf8e7d51ea678e5924b16 | 2022-03-17T13:50:44.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"is",
"arxiv:2201.05601",
"transformers",
"icelandic",
"masked-lm",
"license:agpl-3.0",
"autotrain_compatible"
] | fill-mask | false | mideind | null | mideind/IceBERT-igc | 2 | null | transformers | 24,474 | ---
language: is
widget:
- text: Má bjóða þér <mask> í kvöld?
- text: Forseti <mask> er ágæt.
- text: Súpan var <mask> á bragðið.
tags:
- roberta
- icelandic
- masked-lm
- pytorch
license: agpl-3.0
---
# IceBERT-igc
This model was trained with fairseq using the RoBERTa-base architecture. It is one of many models we have trained for Icelandic; see the paper referenced below for further details. The training data used is shown in the table below.
| Dataset | Size | Tokens |
|------------------------------------------------------|---------|--------|
| Icelandic Gigaword Corpus v20.05 (IGC) | 8.2 GB | 1,388M |
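A minimal masked-LM usage sketch, using one of the widget prompts above (RoBERTa-style models use the `<mask>` token):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="mideind/IceBERT-igc")
print(fill("Má bjóða þér <mask> í kvöld?"))  # top completions with scores
```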
## Citation
The model is described in this paper [https://arxiv.org/abs/2201.05601](https://arxiv.org/abs/2201.05601). Please cite the paper if you make use of the model.
```
@article{DBLP:journals/corr/abs-2201-05601,
author = {V{\'{e}}steinn Sn{\ae}bjarnarson and
Haukur Barri S{\'{\i}}monarson and
P{\'{e}}tur Orri Ragnarsson and
Svanhv{\'{\i}}t Lilja Ing{\'{o}}lfsd{\'{o}}ttir and
Haukur P{\'{a}}ll J{\'{o}}nsson and
Vilhj{\'{a}}lmur {\TH}orsteinsson and
Hafsteinn Einarsson},
title = {A Warm Start and a Clean Crawled Corpus - {A} Recipe for Good Language
Models},
journal = {CoRR},
volume = {abs/2201.05601},
year = {2022},
url = {https://arxiv.org/abs/2201.05601},
eprinttype = {arXiv},
eprint = {2201.05601},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-05601.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
mikaelsouza/msft-smaller-model | 3d7db9f8d312ce50a099b596f3f77b3d85fb4a6a | 2021-11-02T21:07:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mikaelsouza | null | mikaelsouza/msft-smaller-model | 2 | null | transformers | 24,475 | Entry not found |
mimi/ke-t5-base-ko-AIHub-paper-summary | 43668c0b706b228b2463cee0dfc7a07db13ec135 | 2022-01-03T07:34:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mimi | null | mimi/ke-t5-base-ko-AIHub-paper-summary | 2 | null | transformers | 24,476 | Entry not found |
minemile/distilbert-base-uncased-finetuned-imdb | 795e9764cf2759ebaf76bb0d5bc9c7eb7aa421e1 | 2021-12-03T15:15:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | minemile | null | minemile/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 24,477 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4718
## Model description
More information needed
## Intended uses & limitations
More information needed
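Pending fuller documentation, here is a minimal fill-mask sketch, assuming the standard DistilBERT masked-LM interface (the prompt is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="minemile/distilbert-base-uncased-finetuned-imdb")
print(fill("This movie is a great [MASK]."))  # illustrative movie-review-style prompt
```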
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.707 | 1.0 | 157 | 2.4883 |
| 2.572 | 2.0 | 314 | 2.4240 |
| 2.5377 | 3.0 | 471 | 2.4355 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
minu/koelectra-nsmc-discriminator | 7d66b977dd623ddd83b429525b254127b517c483 | 2020-07-24T04:47:37.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | minu | null | minu/koelectra-nsmc-discriminator | 2 | null | transformers | 24,478 | Entry not found |
minwoo/myelectra-small-generator | d6d8f7ac0744b390f759f390f495de00505a29f6 | 2020-07-25T10:30:32.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | minwoo | null | minwoo/myelectra-small-generator | 2 | null | transformers | 24,479 | Entry not found |
mk3smo/dialogpt-med-ahiru | 8b84b1a5a4a5b1611ca2899b4daf9473ab5ca8d7 | 2022-01-01T00:48:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mk3smo | null | mk3smo/dialogpt-med-ahiru | 2 | null | transformers | 24,480 | ---
tags:
- conversational
---
# yea |
model-mili/DailoGPT-Yukub-v3 | cbe798383a28e9690e2d56153974bea4705f7f8e | 2021-11-24T22:31:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | model-mili | null | model-mili/DailoGPT-Yukub-v3 | 2 | null | transformers | 24,481 | ---
tags:
- conversational
---
# Dailo-GPT small Yukub model v3 |
mohsenfayyaz/albert-base-v2-offenseval2019-downsample | b6cfb0cd7e145d481a8ab67c93b8f56217975d0d | 2021-05-03T13:32:38.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | mohsenfayyaz | null | mohsenfayyaz/albert-base-v2-offenseval2019-downsample | 2 | null | transformers | 24,482 | Entry not found |
mohsenfayyaz/albert-base-v2-toxicity | f019d970b615c98429614f266cb09fad41a71653 | 2021-04-19T15:03:51.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | mohsenfayyaz | null | mohsenfayyaz/albert-base-v2-toxicity | 2 | null | transformers | 24,483 | Entry not found |
mollypak/bert-model-full-cardiff | 78cead942911fa01eaafc2a21d333625a2cff4b6 | 2021-12-09T10:14:21.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | mollypak | null | mollypak/bert-model-full-cardiff | 2 | null | transformers | 24,484 | Entry not found |
mollypak/bert-multilingual-base | 604c27316299d7f90c1b72ce7650bec4d7104536 | 2021-11-29T12:03:26.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mollypak | null | mollypak/bert-multilingual-base | 2 | null | transformers | 24,485 | Entry not found |
mollypak/cardiff-num | 1cdeeccbba3f01a34d28e94856a614d515368ba3 | 2021-12-16T07:25:44.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | mollypak | null | mollypak/cardiff-num | 2 | null | transformers | 24,486 | Entry not found |
mollypak/roberta-base | b4997e68413787ef641a5479ae9279e3c9ed59a6 | 2021-12-11T20:40:20.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | mollypak | null | mollypak/roberta-base | 2 | null | transformers | 24,487 | Entry not found |
mollypak/roberta-model-full | 475436bef0d928f0c3cdbdb069e2ab2c76b435ff | 2021-12-07T09:33:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mollypak | null | mollypak/roberta-model-full | 2 | null | transformers | 24,488 | Entry not found |
mollypak/roberta-tiny-model-full | 4acdb92d34f8fe8a8969bc111132f43456f26e7d | 2021-12-07T12:37:47.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | mollypak | null | mollypak/roberta-tiny-model-full | 2 | null | transformers | 24,489 | Entry not found |
mollypak/twitter-roberta-base-sentiment-cardiff | 1c69045b9fb5a9c4e31862b3be98f51a3e16f2ee | 2021-12-10T07:20:33.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | mollypak | null | mollypak/twitter-roberta-base-sentiment-cardiff | 2 | null | transformers | 24,490 | Entry not found |
moma1820/DSV-Classifier | 31a4395f9841295a9890304a5ca910e6500645cb | 2021-11-18T14:27:14.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | moma1820 | null | moma1820/DSV-Classifier | 2 | null | transformers | 24,491 | Entry not found |
monologg/kocharelectra-base-modu-ner-nx | 0f9a0710697dfb7c04f276a194b1c556c8d2e55a | 2020-12-07T07:48:11.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | monologg | null | monologg/kocharelectra-base-modu-ner-nx | 2 | null | transformers | 24,492 | Entry not found |
monologg/koelectra-base-v3-bias | 79ee034c1b9fd115b58595e4f32d8437f3b4b80b | 2021-01-07T14:18:11.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | monologg | null | monologg/koelectra-base-v3-bias | 2 | null | transformers | 24,493 | Entry not found |
monsoon-nlp/byt5-dv | 6924d3e9065fb5e3c612faa6882fa0850ca6373e | 2021-07-07T03:38:14.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"dv",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | monsoon-nlp | null | monsoon-nlp/byt5-dv | 2 | null | transformers | 24,494 | ---
language: dv
---
# byt5-dv
Pretrained from scratch on Dhivehi (language of the Maldives)
with ByT5, Google's new byte-level tokenizer strategy.
Corpus: dv.wikipedia.org as of March 2020 (TFDS)
Notebook - Pretraining on Wikipedia: https://colab.research.google.com/drive/19Afq7CI6cOi1DaTpnQhBbEbnBzLSFHbH
## Demo
Notebook - Finetuning on Maldivian news classification task: https://colab.research.google.com/drive/11u5SafR4bKICmArgDl6KQ9vqfYtDpyWp
Current performance:
- mBERT: 52%
- **byt5-dv**: 81%
- dv-wave (ELECTRA): 89%
- dv-muril: 90.7%
- dv-labse: 91.3-91.5%
Source of dataset: https://github.com/Sofwath/DhivehiDatasets
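A minimal loading sketch: the checkpoint follows the standard ByT5/T5 seq2seq interface, and since it is pretrained-only (span corruption), it needs fine-tuning before it produces useful task output:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/byt5-dv")  # byte-level, no fixed wordpiece vocab
model = T5ForConditionalGeneration.from_pretrained("monsoon-nlp/byt5-dv")

# Thaana text is tokenized as raw UTF-8 bytes; fine-tune on a downstream task from here.
inputs = tokenizer("ދިވެހި", return_tensors="pt")
print(inputs["input_ids"].shape)
```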
## Work in progress - todos
The Wikipedia corpus is too small for this language. In the future I would add
OSCAR and Sofwath's Maldivian corpus, if I can rewrite the script to accept those
as one TFDS dataset.
This is based on ByT5-small ... we should try a larger model
This needs more time for pretraining |
monsoon-nlp/dv-labse | 6f37940921c6fd03d1afefa764980e71cc41e0ff | 2021-05-19T23:58:00.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"dv",
"transformers",
"autotrain_compatible"
] | fill-mask | false | monsoon-nlp | null | monsoon-nlp/dv-labse | 2 | null | transformers | 24,495 | ---
language: dv
---
# dv-labse
This is an experiment in cross-lingual transfer learning, to insert Dhivehi word and
word-piece tokens into Google's LaBSE model.
- Original model weights: https://huggingface.co/setu4993/LaBSE
- Original model announcement: https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html
This currently outperforms dv-wave and dv-MuRIL (a similar transfer learning model) on
the Maldivian News Classification task https://github.com/Sofwath/DhivehiDatasets
- mBERT: 52%
- dv-wave (ELECTRA): 89%
- dv-muril: 90.7%
- dv-labse: 91.3-91.5% (may continue training)
## Training
- Start with LaBSE (similar to mBERT) with no Thaana vocabulary
- Based on PanLex dictionaries, attach 1,100 Dhivehi words to Sinhalese or English embeddings
- Add remaining words and word-pieces from dv-wave's vocabulary to vocab.txt
- Continue BERT pretraining on Dhivehi text
CoLab notebook:
https://colab.research.google.com/drive/1CUn44M2fb4Qbat2pAvjYqsPvWLt1Novi
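A minimal fill-mask sketch for trying the adapted vocabulary (the prompt is just an illustrative Thaana fragment):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="monsoon-nlp/dv-labse")
print(fill("ދިވެހިރާއްޖެ [MASK]"))  # any Thaana-script sentence with a single [MASK]
```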
|
monsoon-nlp/no-phone-gpt2 | c594d0dc16a5e205384e74d6dc536da0ea775c1c | 2021-05-23T10:04:57.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"exbert",
"license:mit"
] | text-generation | false | monsoon-nlp | null | monsoon-nlp/no-phone-gpt2 | 2 | null | transformers | 24,496 | ---
language: en
tags:
- exbert
license: mit
---
# no-phone-gpt2
This is a test of removing memorized private information, such as phone numbers, from a small GPT-2 model. The resulting model should not generate valid phone numbers.
Inspired by BAIR privacy research:
- https://bair.berkeley.edu/blog/2019/08/13/memorization/
- https://bair.berkeley.edu/blog/2020/12/20/lmmem/
[Blog post](https://mapmeld.medium.com/scrambling-memorized-info-in-gpt-2-60753d7652d8)
## Process
- All +## and +### tokens were replaced with new, randomly-selected 2- and 3-digit numbers in the vocab.json and tokenizer.json. You can identify these in outputs because the new tokens start with ^^.
- Input and output embeddings for +## and +### tokens were moved to the +00 and +000 embeddings.
- Removed associations between numbers from merges.txt
Using a library such as [ecco](https://github.com/jalammar/ecco), probabilities for next number token look equally likely, with +000 preferred.
Code: https://colab.research.google.com/drive/1X31TIZjmxlXMXAzQrR3Fl1AnLzGBCpWf#scrollTo=0GVFwrAgY68J
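A quick spot-check of the intended behaviour (the prompt is illustrative; generations should surface the scrambled `^^`-prefixed number tokens rather than a valid phone number):
```python
from transformers import pipeline

gen = pipeline("text-generation", model="monsoon-nlp/no-phone-gpt2")
# A prompt chosen to tempt the model into emitting a phone number.
print(gen("You can reach me at ", max_length=20, do_sample=True)[0]["generated_text"])
```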
### Future goals
- Add new +### tokens to rebuild number generation
- Fine-tune new tokens on counting numbers and ended phone numbers
- Use [gpt2-large](https://huggingface.co/gpt2-large)
### BibTeX entry and citation info
Original GPT-2:
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
|
moshew/miny-bert-aug-sst2-distilled | 12ae20e93fe59d884c5e5a1eaea0c0a081756947 | 2022-02-17T11:48:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:augmented_glue_sst2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | moshew | null | moshew/miny-bert-aug-sst2-distilled | 2 | null | transformers | 24,497 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- augmented_glue_sst2
metrics:
- accuracy
model-index:
- name: miny-bert-aug-sst2-distilled
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: augmented_glue_sst2
type: augmented_glue_sst2
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9128440366972477
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# miny-bert-aug-sst2-distilled
This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the augmented_glue_sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2643
- Accuracy: 0.9128
## Model description
More information needed
## Intended uses & limitations
More information needed
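Pending fuller documentation, a minimal sentiment-classification sketch (the label names come from the checkpoint's config and are not documented here, so treat them as assumptions):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="moshew/miny-bert-aug-sst2-distilled")
print(clf("A charming and often affecting journey."))  # illustrative SST-2-style input
```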
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.602 | 1.0 | 6227 | 0.3389 | 0.9186 |
| 0.4195 | 2.0 | 12454 | 0.2989 | 0.9151 |
| 0.3644 | 3.0 | 18681 | 0.2794 | 0.9117 |
| 0.3304 | 4.0 | 24908 | 0.2793 | 0.9106 |
| 0.3066 | 5.0 | 31135 | 0.2659 | 0.9186 |
| 0.2881 | 6.0 | 37362 | 0.2668 | 0.9140 |
| 0.2754 | 7.0 | 43589 | 0.2643 | 0.9128 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
motiondew/bert-set_date_3-lr-3e-5-bs-32-ep-3 | 1d3c9aee80dd385b7feaa4f3b29f526279460078 | 2021-06-25T12:54:25.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | motiondew | null | motiondew/bert-set_date_3-lr-3e-5-bs-32-ep-3 | 2 | null | transformers | 24,498 | Entry not found |
motiondew/set_date_1-bert | f6a952f8be30d365d0ff7dc577d8ce496da5898b | 2021-06-22T20:31:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | motiondew | null | motiondew/set_date_1-bert | 2 | null | transformers | 24,499 |