Dataset columns (types and value ranges from the dataset viewer): modelId (string, 4-112 chars); sha (string, 40 chars); lastModified (string, 24 chars); tags (sequence); pipeline_tag (string, 29 classes); private (bool, 1 class); author (string, 2-38 chars, nullable); config (null); id (string, 4-112 chars); downloads (float64, 0-36.8M, nullable); likes (float64, 0-712, nullable); library_name (string, 17 classes); __index_level_0__ (int64, 0-38.5k); readme (string, 0-186k chars)

modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cocoaclef/DialoGPT-small-kohaku | 1ab3547574b96508e6edd75445e27ce76a619af0 | 2021-11-12T18:39:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | cocoaclef | null | cocoaclef/DialoGPT-small-kohaku | 2 | null | transformers | 23,800 | ---
tags:
- conversational
---
# Kohaku DialoGPT Model |
codeceejay/HIYACCENT_Wav2Vec2 | ea52116d3f7234c7fefe0940b05546113e8caec3 | 2022-02-21T12:39:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | codeceejay | null | codeceejay/HIYACCENT_Wav2Vec2 | 2 | 1 | transformers | 23,801 | HIYACCENT: An Improved Nigerian-Accented Speech Recognition System Based on Contrastive Learning
The global objective of this research was to develop a more robust model for Nigerian English speakers, whose English pronunciation is heavily affected by their mother tongue. For this, the Wav2Vec-HIYACCENT model was proposed, which introduces a new layer on top of Facebook's Wav2Vec2 to capture the disparity between the baseline model and Nigerian English speech. A CTC loss was also placed on top of the model, which adds flexibility to the speech-text alignment. This resulted in an improvement of over 20% in performance for Nigerian-accented English (NAE).
Fine-tuned facebook/wav2vec2-large on English using the UISpeech Corpus. When using this model, make sure that your speech input is sampled at 16kHz.
The script used for training can be found here: https://github.com/amceejay/HIYACCENT-NE-Speech-Recognition-System
## Usage
The model can be used directly (without a language model) as follows.
### Using the ASRecognition library
from asrecognition import ASREngine
# "en" is the language code; this model targets (Nigerian-accented) English speech.
asr = ASREngine("en", model_path="codeceejay/HIYACCENT_Wav2Vec2")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = asr.transcribe(audio_paths)
### Writing your own inference script
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "en"
MODEL_ID = "codeceejay/HIYACCENT_Wav2Vec2"
SAMPLES = 10
# You can use common_voice or timit; Nigerian-accented speech data can also be found here: https://openslr.org/70/
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
|
coiour/mymodel001 | dbc5986360e4f164a98af636cc82ef3197cbd3d9 | 2021-11-02T10:05:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | coiour | null | coiour/mymodel001 | 2 | null | transformers | 23,802 | Entry not found |
coldfir3/xlm-roberta-base-finetuned-panx-de-fr | 6d9d35789a4f20ea8fb6e36edd69af2552e39d9a | 2022-01-02T18:32:48.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | coldfir3 | null | coldfir3/xlm-roberta-base-finetuned-panx-de-fr | 2 | null | transformers | 23,803 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1667
- F1: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
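For reference, these values map roughly onto the transformers `TrainingArguments` API as sketched below (not part of the original card; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above (sketch only).
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de-fr",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```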
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 |
| 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 |
| 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
coldfir3/xlm-roberta-base-finetuned-panx-en | efcf033852c5824d583d22bcde59c5e1ac7cb975 | 2022-01-02T19:20:00.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | coldfir3 | null | coldfir3/xlm-roberta-base-finetuned-panx-en | 2 | null | transformers | 23,804 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.7075365579302588
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3925
- F1: 0.7075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1493 | 1.0 | 50 | 0.5884 | 0.4748 |
| 0.5135 | 2.0 | 100 | 0.4088 | 0.6623 |
| 0.3558 | 3.0 | 150 | 0.3925 | 0.7075 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
coldfir3/xlm-roberta-base-finetuned-panx-it | d2806a88c10971bdd90dc6c640132f9cbbcaf863 | 2022-01-02T19:04:55.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | coldfir3 | null | coldfir3/xlm-roberta-base-finetuned-panx-it | 2 | null | transformers | 23,805 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.822805578342904
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2323
- F1: 0.8228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8126 | 1.0 | 70 | 0.3361 | 0.7231 |
| 0.2995 | 2.0 | 140 | 0.2526 | 0.8079 |
| 0.1865 | 3.0 | 210 | 0.2323 | 0.8228 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
comodoro/wav2vec2-xls-r-300m-sk-cv8 | caba6c9c8800778823ce75d96bed9ee2eb56ea3a | 2022-03-24T11:55:26.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sk",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | comodoro | null | comodoro/wav2vec2-xls-r-300m-sk-cv8 | 2 | null | transformers | 23,806 | ---
language:
- sk
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- xlsr-fine-tuning-week
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: Slovak comodoro Wav2Vec2 XLSR 300M CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sk
metrics:
- name: Test WER
type: wer
value: 49.6
- name: Test CER
type: cer
value: 13.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sk
metrics:
- name: Test WER
type: wer
value: 81.7
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sk
metrics:
- name: Test WER
type: wer
value: 80.26
---
# wav2vec2-xls-r-300m-sk-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset.
It achieves the following results on the evaluation set:
- WER: 0.49575384615384616
- CER: 0.13333333333333333
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "sk", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-sk-cv8")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-sk-cv8")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-sk-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config sk
```
## Training and evaluation data
The Common Voice 8.0 `train` and `validation` datasets were used for training.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-4
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
conjuring92/distilroberta-base-finetuned-toxic | eec88eef234e7b5c8943fab9083cc5a3c0b3d129 | 2022-02-01T18:24:09.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | conjuring92 | null | conjuring92/distilroberta-base-finetuned-toxic | 2 | null | transformers | 23,807 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-toxic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-toxic
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5338 | 1.0 | 313 | 2.3127 |
| 2.4482 | 2.0 | 626 | 2.2985 |
| 2.4312 | 3.0 | 939 | 2.2411 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0
- Datasets 1.18.1
- Tokenizers 0.10.3
|
countrysideid/opus-mt-en-zh-chk1 | ce66f7c6d19c324e8931b2ab6d6c65db024d8b1e | 2022-02-13T20:18:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | countrysideid | null | countrysideid/opus-mt-en-zh-chk1 | 2 | null | transformers | 23,808 | Entry not found |
cowTodd/adalm-bio-small | 9276e6c18578a9ef1685fbce598f8639836ce9a4 | 2021-09-18T06:10:11.000Z | [
"pytorch",
"transformers"
] | null | false | cowTodd | null | cowTodd/adalm-bio-small | 2 | null | transformers | 23,809 | Entry not found |
cowTodd/adalm-cs-base | 7602f985f91812d7c897d530fcf7e428fc50daec | 2021-09-18T06:47:03.000Z | [
"pytorch",
"transformers"
] | null | false | cowTodd | null | cowTodd/adalm-cs-base | 2 | null | transformers | 23,810 | Entry not found |
crang/wav2vec2-large-xlsr-53-frisian | 614a464e19279a663e177825ea29f6d70b2d3c64 | 2021-07-06T00:53:59.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fy-NL",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | crang | null | crang/wav2vec2-large-xlsr-53-frisian | 2 | null | transformers | 23,811 | ---
language: fy-NL
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Frisian XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fy-NL
type: common_voice
args: fy-NL
metrics:
- name: Test WER
type: wer
value: 19.11
---
# Wav2Vec2-Large-XLSR-53-Frisian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Frisian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fy-NL", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("crang/wav2vec2-large-xlsr-53-frisian")
model = Wav2Vec2ForCTC.from_pretrained("crang/wav2vec2-large-xlsr-53-frisian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Frisian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fy-NL", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("crang/wav2vec2-large-xlsr-53-frisian")
model = Wav2Vec2ForCTC.from_pretrained("crang/wav2vec2-large-xlsr-53-frisian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\u2013\u2014\;\:\"\%]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 19.11 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
creat89/NER_FEDA_Cyrillic1 | b43fd211dcd76e4310d53a73c0f7845721a4ea8e | 2022-04-13T09:07:44.000Z | [
"pytorch",
"bert",
"multilingual",
"ru",
"bg",
"mk",
"uk",
"fi",
"transformers",
"labse",
"ner",
"license:mit"
] | null | false | creat89 | null | creat89/NER_FEDA_Cyrillic1 | 2 | null | transformers | 23,812 | ---
license: mit
language:
- multilingual
- ru
- bg
- mk
- uk
- fi
tags:
- labse
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation (FEDA) architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SlavNER 17 (LOC, MISC, ORG, PER)
4. CNE5 (GEOPOLIT, LOC, MEDIA, PER, ORG)
5. FactRuEval (LOC, ORG, PER)
6. NER-UK (LOC, MISC, ORG, PER)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: miscellaneous, MEDIA: media, ART: artifact, TIME: time, DATE: date, GEOPOLIT: geopolitical.
You can select the tagset to use in the output by configuring the model.
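The FEDA-specific heads and the tagset-selection logic live in the NER_FEDA code referenced below, so there is no ready-made transformers pipeline for this checkpoint; as a minimal sketch (an assumption, not taken from the original card), the shared LaBSE-based encoder can at least be loaded with the generic Auto classes:

```python
from transformers import AutoTokenizer, AutoModel

# Sketch only: tagset selection and the NER heads require the NER_FEDA code
# (https://github.com/EMBEDDIA/NER_FEDA); this simply loads the shared encoder.
tokenizer = AutoTokenizer.from_pretrained("creat89/NER_FEDA_Cyrillic1")
encoder = AutoModel.from_pretrained("creat89/NER_FEDA_Cyrillic1")
```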
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA). |
creat89/NER_FEDA_Cyrillic2 | 764c362a60c32c6599b2c852b97bb447869e702c | 2022-04-13T09:09:14.000Z | [
"pytorch",
"bert",
"multilingual",
"ru",
"bg",
"mk",
"uk",
"fi",
"transformers",
"labse",
"ner",
"license:mit"
] | null | false | creat89 | null | creat89/NER_FEDA_Cyrillic2 | 2 | null | transformers | 23,813 | ---
license: mit
language:
- multilingual
- ru
- bg
- mk
- uk
- fi
tags:
- labse
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation (FEDA) architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SlavNER 17 (LOC, MISC, ORG, PER)
4. CNE5 (GEOPOLIT, LOC, MEDIA, PER, ORG)
5. FactRuEval (LOC, ORG, PER)
6. NER-UK (LOC, MISC, ORG, PER)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: miscellaneous, MEDIA: media, ART: artifact, TIME: time, DATE: date, GEOPOLIT: geopolitical.
You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA). |
creat89/NER_FEDA_Latin1 | 929ca8fe0713909d777c9c2e50ef6aa36154b2c8 | 2022-04-13T09:02:03.000Z | [
"pytorch",
"bert",
"multilingual",
"cs",
"pl",
"sl",
"fi",
"transformers",
"labse",
"ner",
"license:mit"
] | null | false | creat89 | null | creat89/NER_FEDA_Latin1 | 2 | null | transformers | 23,814 | ---
license: mit
language:
- multilingual
- cs
- pl
- sl
- fi
tags:
- labse
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation (FEDA) architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SlavNER 17 (LOC, MISC, ORG, PER)
4. SSJ500k (LOC, MISC, ORG, PER)
5. KPWr (EVT, LOC, ORG, PER, PRO)
6. CNEC (LOC, ORG, MEDIA, ART, PER, TIME)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date
You can select the tagset to use in the output by configuring the model.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
|
creat89/NER_FEDA_Latin2 | d432ceebacdec7f7eba6d0a9e5dec84b5206ee83 | 2022-04-13T09:03:00.000Z | [
"pytorch",
"bert",
"multilingual",
"cs",
"pl",
"sl",
"fi",
"transformers",
"labse",
"ner",
"license:mit"
] | null | false | creat89 | null | creat89/NER_FEDA_Latin2 | 2 | null | transformers | 23,815 | ---
license: mit
language:
- multilingual
- cs
- pl
- sl
- fi
tags:
- labse
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation (FEDA) architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SlavNER 17 (LOC, MISC, ORG, PER)
4. SSJ500k (LOC, MISC, ORG, PER)
5. KPWr (EVT, LOC, ORG, PER, PRO)
6. CNEC (LOC, ORG, MEDIA, ART, PER, TIME)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date
You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA). |
creat89/NER_FEDA_Pl | 58ded451efed1045143dfbebecbf77c2e6da8014 | 2022-04-13T09:37:07.000Z | [
"pytorch",
"bert",
"pl",
"transformers",
"polish_bert",
"ner",
"license:mit"
] | null | false | creat89 | null | creat89/NER_FEDA_Pl | 2 | null | transformers | 23,816 | ---
license: mit
language:
- pl
tags:
- polish_bert
- ner
---
This is a Polish NER system trained using a Frustratingly Easy Domain Adaptation (FEDA) architecture. It is based on Polish BERT and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. KPWr (EVT, LOC, ORG, PER, PRO)
4. NKJP (DATE, GEOPOLIT, LOC, ORG, PER, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical
You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA). |
creat89/NER_FEDA_Uk | 2769989d0add76fddbd43964faa2d4bf0cc1732f | 2022-04-13T09:29:36.000Z | [
"pytorch",
"bert",
"multilingual",
"uk",
"transformers",
"labse",
"ner",
"license:mit"
] | null | false | creat89 | null | creat89/NER_FEDA_Uk | 2 | null | transformers | 23,817 | ---
license: mit
language:
- multilingual
- uk
tags:
- labse
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation (FEDA) architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. NER-UK (LOC, MISC, ORG, PER)
4. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: miscellaneous, MEDIA: media, ART: artifact, TIME: time, DATE: date, GEOPOLIT: geopolitical.
You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
|
creynier/wav2vec2-base-swbd-turn-small-3 | bca0b45d6041f51bc33e1812b8bd66812b33f4c6 | 2022-02-28T16:21:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-small-3 | 2 | null | transformers | 23,818 | Entry not found |
csarron/clip-vit-base-patch16 | 90d892986e8b01839362119175cfea01052103d0 | 2022-02-05T22:36:40.000Z | [
"pytorch",
"clip_vision_model",
"transformers"
] | null | false | csarron | null | csarron/clip-vit-base-patch16 | 2 | null | transformers | 23,819 | Entry not found |
csikasote/wav2vec2-large-xls-r-1b-bemba-fds | 4f945c9edb8663e074c67ee702364ef240e68f6f | 2022-02-11T12:28:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"bem",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | csikasote | null | csikasote/wav2vec2-large-xls-r-1b-bemba-fds | 2 | null | transformers | 23,820 | ---
license: apache-2.0
tags:
- generated_from_trainer
- bem
- robust-speech-event
model-index:
- name: wav2vec2-large-xls-r-1b-bemba-fds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-bemba-fds
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the [BembaSpeech](https://github.com/csikasote/BembaSpeech) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2898
- Wer: 0.3435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7986 | 0.34 | 500 | 0.4549 | 0.7292 |
| 0.5358 | 0.67 | 1000 | 0.3325 | 0.4491 |
| 0.4559 | 1.01 | 1500 | 0.3090 | 0.3954 |
| 0.3983 | 1.35 | 2000 | 0.3067 | 0.4105 |
| 0.4067 | 1.68 | 2500 | 0.2838 | 0.3678 |
| 0.3722 | 2.02 | 3000 | 0.2824 | 0.3762 |
| 0.3286 | 2.36 | 3500 | 0.2810 | 0.3670 |
| 0.3239 | 2.69 | 4000 | 0.2643 | 0.3501 |
| 0.3187 | 3.03 | 4500 | 0.2838 | 0.3754 |
| 0.2801 | 3.36 | 5000 | 0.2815 | 0.3507 |
| 0.2806 | 3.7 | 5500 | 0.2725 | 0.3486 |
| 0.2714 | 4.04 | 6000 | 0.2898 | 0.3435 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cutiebunny639/DialoGPT-small-harry | 24b6b2d150b7bae435b857dda6b8e5150f1f7293 | 2021-12-20T06:12:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | cutiebunny639 | null | cutiebunny639/DialoGPT-small-harry | 2 | null | transformers | 23,821 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
cwitcate/mymodel1001 | 43b7baeeb529936c644a6f7f742e1f7b99f5c17e | 2021-11-02T09:23:04.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cwitcate | null | cwitcate/mymodel1001 | 2 | null | transformers | 23,822 | Entry not found |
cwtpc/wangchanberta-ner-8989 | cd933ad5c47fc68a69790be752b4cf29322db23f | 2022-02-15T03:48:11.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | cwtpc | null | cwtpc/wangchanberta-ner-8989 | 2 | null | transformers | 23,823 | ## Hello World |
cyclone/cyclone-ner | 0ac1109c2f81d24c69a8a9891100e4e9b18284bc | 2021-09-29T10:34:26.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | cyclone | null | cyclone/cyclone-ner | 2 | null | transformers | 23,824 | ## Cyclone Chinese NER
This model provides a simplified-Chinese NER model based on the pretrained BERT model (specifically BERT + CRF).
Currently, we only support 8 general types of entities ("address", "company", "government", "name", "organization", "position", "scene", "time").
### Usage
from transformers import BertConfig
# num_labels must match the label set used when training the model.
config = BertConfig.from_pretrained("bert-base-chinese", num_labels=num_labels)
model_path = "cyclone/cyclone-ner"
# CNerTokenizer and BertCrfForNer are custom classes from the model author's NER code,
# not part of the transformers library.
tokenizer = CNerTokenizer.from_pretrained(model_path, do_lower_case=True)
model = BertCrfForNer.from_pretrained(model_path, config=config)
|
cyl/adapter_t5-3b_mnli | a616aeb69841011774439aa26df9d0996bb0f9fb | 2022-02-15T16:50:11.000Z | [
"pytorch",
"transformers"
] | null | false | cyl | null | cyl/adapter_t5-3b_mnli | 2 | null | transformers | 23,825 | Entry not found |
cyl/adapter_t5-3b_rte | 40594200db13bed0c645332d4d9531ac8159dac1 | 2022-02-22T11:36:32.000Z | [
"pytorch",
"transformers"
] | null | false | cyl | null | cyl/adapter_t5-3b_rte | 2 | null | transformers | 23,826 | Entry not found |
d42kw01f/Tamil-RoBERTa | df391c767fb008f02c5601062e4f18a639f58a1f | 2021-11-09T16:04:44.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | d42kw01f | null | d42kw01f/Tamil-RoBERTa | 2 | null | transformers | 23,827 | # Description:
This is a small model pre-trained on the Tamil language using masked language modeling (MLM). It was trained on the OSCAR Tamil dataset.
# How to Use:
The model can be used directly with a pipeline for masked language modeling:
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("d42kw01f/Tamil-RoBERTa")
>>> model = AutoModelForMaskedLM.from_pretrained("d42kw01f/Tamil-RoBERTa")
>>> fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> fill_mask("நான் வீட்டு <mask>.")
``` |
damien-ir/kosentelectra-generator-v4 | 5b84767f67475ffeefbab3e58baf745a47759748 | 2020-09-29T07:56:07.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | damien-ir | null | damien-ir/kosentelectra-generator-v4 | 2 | null | transformers | 23,828 | Entry not found |
danhsf/mt5-small-finetuned-hi-to-en | d73f43fe7d889adf60f7e1a58ec4d461f0002bfd | 2021-11-30T01:29:56.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | danhsf | null | danhsf/mt5-small-finetuned-hi-to-en | 2 | null | transformers | 23,829 | Entry not found |
danny481/DialoGPT-small-datnguyenchatbot | 8f5b3256e0d6f332f213a8eeaff382e3d1319f74 | 2021-12-29T11:41:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | danny481 | null | danny481/DialoGPT-small-datnguyenchatbot | 2 | null | transformers | 23,830 | ---
tags:
- conversational
---
# datnguyen |
danny911kr/calm-mix-large | 02143ab74b2e3b31b0eb324cc682a4782567626d | 2021-09-16T07:23:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | danny911kr | null | danny911kr/calm-mix-large | 2 | null | transformers | 23,831 | ## CALM
This model is for the ICLR 2021 paper: [Pre-training Text-to-Text Transformers for Concept-centric Common Sense](https://openreview.net/forum?id=3k20LAiHYL2).
Check out our [Project website](https://inklab.usc.edu/calm-project) for details!
```bibtex
@inproceedings{CALM2021,
title={Pre-training Text-to-Text Transformers for Concept-centric Common Sense},
author={Wangchunshu Zhou and Dong-Ho Lee and Ravi Kiran Selvam and Seyeon Lee and Bill Yuchen Lin and Xiang Ren},
booktitle={ICLR},
year={2021}
}
``` |
danurahul/Eddie_neo_j6 | 03bdd3137b24c0c4b52bedda69451fd848790124 | 2021-06-17T04:38:06.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/Eddie_neo_j6 | 2 | null | transformers | 23,832 | Entry not found |
danurahul/alex-gpt-L | 06752b61b53c5ee6bd6a6a0e729ce8492a8df160 | 2021-05-21T15:13:43.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/alex-gpt-L | 2 | null | transformers | 23,833 | Entry not found |
danurahul/doc2txt_model2 | 70555e11978cd3a19ba071805de488a37c57538f | 2021-05-21T15:21:33.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/doc2txt_model2 | 2 | null | transformers | 23,834 | Entry not found |
danurahul/ghosh_dentist_med | e3a4023916e85e046a8bcfcc77cb7d1824599dd2 | 2021-07-07T11:48:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/ghosh_dentist_med | 2 | null | transformers | 23,835 | Entry not found |
danurahul/yoav_gpt_neo1.3B_delimiter | f52ae73523fc116f927829beb25fb2b3ab9e0a08 | 2021-06-19T02:27:20.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/yoav_gpt_neo1.3B_delimiter | 2 | null | transformers | 23,836 | Entry not found |
daqiao202/distilgpt2-finetuned-wikitext2 | eed7fc6bb25aecba943743b1c75fb9d03a3cd70d | 2021-11-16T02:28:45.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | daqiao202 | null | daqiao202/distilgpt2-finetuned-wikitext2 | 2 | null | transformers | 23,837 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
darthboii/DialoGPT-small-Rick | 308ce0fca4fd8171aec0b8cc980a731db0d96e2c | 2021-09-15T11:11:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | darthboii | null | darthboii/DialoGPT-small-Rick | 2 | null | transformers | 23,838 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
davidcechak/CDNA_bert_6 | f655737c9eeca9c93a8fe81bb6ffb2551e1453b0 | 2022-01-25T17:12:51.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | davidcechak | null | davidcechak/CDNA_bert_6 | 2 | null | transformers | 23,839 | Entry not found |
dbmdz/electra-base-italian-mc4-cased-discriminator | ebb782a9c3a6bd5059b107d70a18dd17516be089 | 2021-08-23T21:39:18.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
] | null | false | dbmdz | null | dbmdz/electra-base-italian-mc4-cased-discriminator | 2 | 1 | transformers | 23,840 | Entry not found |
dbmdz/electra-base-italian-mc4-cased-generator | 96c044c8ebbb5a607ab1633d0713414155c68f1b | 2021-08-23T21:47:11.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/electra-base-italian-mc4-cased-generator | 2 | null | transformers | 23,841 | Entry not found |
ddobokki/vit-kogpt_trinity-coco-ko | 2e8745b5b00189c63a4e752a3c2a434fe6a6f9db | 2021-12-16T03:45:07.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | ddobokki | null | ddobokki/vit-kogpt_trinity-coco-ko | 2 | null | transformers | 23,842 | Entry not found |
deepakvk/distilbert-base-uncased-distilled-squad-finetuned-squad | acc82c84d3a4890219183be5c6c4c706f041a379 | 2022-02-25T08:04:27.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | deepakvk | null | deepakvk/distilbert-base-uncased-distilled-squad-finetuned-squad | 2 | null | transformers | 23,843 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-distilled-squad-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-squad-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
deepset/tapas-large-nq-reader | 9eabad16555e448b11b4f3cee0d95bae420b2faa | 2022-01-23T14:59:07.000Z | [
"pytorch",
"tapas",
"en",
"transformers",
"license:apache-2.0"
] | null | false | deepset | null | deepset/tapas-large-nq-reader | 2 | null | transformers | 23,844 | ---
language: en
tags:
- tapas
license: apache-2.0
---
This model contains the converted PyTorch checkpoint of the original Tensorflow model available in the [TaPas repository](https://github.com/google-research/tapas/blob/master/DENSE_TABLE_RETRIEVER.md#reader-models).
It is described in Herzig et al.'s (2021) [paper](https://aclanthology.org/2021.naacl-main.43/) _Open Domain Question Answering over Tables via Dense Retrieval_.
This model has two versions, which differ only in the table scoring head.
The default one has an adapted table scoring head in order to be able to generate probabilities out of the logits.
The other (non-default) version corresponds to the original checkpoint from the TaPas repository and can be accessed by setting `revision="original"`.
# Usage
## In Haystack
If you want to use this model for question-answering over tables, you can load it in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import TableReader
table_reader = TableReader(model_name_or_path="deepset/tapas-large-nq-reader")
```
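To pick the non-default version directly with transformers, the `revision` argument of `from_pretrained` can be used (a minimal sketch, assuming the checkpoint loads with the generic Auto classes):

```python
from transformers import AutoTokenizer, AutoModel

# revision="original" selects the unmodified TaPas checkpoint described above;
# omit it to get the default head adapted to produce probabilities.
tokenizer = AutoTokenizer.from_pretrained("deepset/tapas-large-nq-reader")
model = AutoModel.from_pretrained("deepset/tapas-large-nq-reader", revision="original")
```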
|
deeq/delectra-generator | 28f17a597444ab9c4b2cc53f5dabf2994bb187f1 | 2021-07-23T04:31:46.000Z | [
"pytorch",
"electra",
"fill-mask",
"ko",
"dataset:kowiki",
"dataset:news",
"transformers",
"autotrain_compatible"
] | fill-mask | false | deeq | null | deeq/delectra-generator | 2 | null | transformers | 23,845 | ---
language: ko
datasets:
- kowiki
- news
---
deeqELECTRA-base
---
- model: electra-base-generator
- vocab: bert-wordpiece, 35k
- version: beta, 1.71M
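A quick way to try the generator's masked-LM head is the fill-mask pipeline (a sketch, not part of the original card; the Korean example sentence is invented for illustration, and it assumes the checkpoint exposes a standard masked-LM head):

```python
from transformers import pipeline

# Fill-mask with the generator checkpoint; [MASK] is the usual mask token
# for a BERT-wordpiece vocabulary.
fill_mask = pipeline("fill-mask", model="deeq/delectra-generator")
print(fill_mask("대한민국의 수도는 [MASK]이다."))
```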
|
demdecuong/stroke_simcse | ab1128ca88b3f7b9f24b20ef82e0c307b44fabe5 | 2021-05-31T13:59:11.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.08821",
"transformers"
] | feature-extraction | false | demdecuong | null | demdecuong/stroke_simcse | 2 | null | transformers | 23,846 | This is finetune version of [SimCSE: Simple Contrastive Learning of Sentence Embeddings](https://arxiv.org/abs/2104.08821)
, train unsupervised on 570K stroke sentences from : stroke books, quora medical, quora's stroke and human annotates.
### Extract sentence representation
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("demdecuong/stroke_simcse")
model = AutoModel.from_pretrained("demdecuong/stroke_simcse")
text = "What are disease related to red stroke's causes?"
inputs = tokenizer(text, return_tensors='pt')
outputs = model(**inputs)[1]
```
### Build up embedding for database
```
database = [
'What is the daily checklist for stroke returning home',
'What are some tips for stroke adapt new life',
'What should I consider when using nursing-home care'
]
embedding = torch.zeros((len(database),768))
for i in range(len(database)):
inputs = tokenizer(database[i], return_tensors="pt")
outputs = model(**inputs)[1]
embedding[i] = outputs
print(embedding.shape)
```
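Given the query representation and the database embeddings above, the closest matching question can be retrieved with cosine similarity (a sketch, not part of the original card; it reuses `outputs`, `embedding` and `database` from the snippets above):

```
import torch
import torch.nn.functional as F

# Compare the query embedding (1 x 768) against every database embedding (N x 768).
scores = F.cosine_similarity(outputs, embedding)
best = torch.argmax(scores).item()
print(database[best], scores[best].item())
```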
### Result
On our PoC test set, which contains human-generated pairs of matching stroke-related questions.
| Model | Top-1 Accuracy |
| ------------- | ------------- |
| SimCSE (supervised) | 75.83 |
| SimCSE (ours) | 76.66 | |
denden/iloko_model | 0114a9b3ca90252ddd5071a261c6f638186ee0e2 | 2021-11-04T10:24:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | denden | null | denden/iloko_model | 2 | null | transformers | 23,847 | ---
license: apache-2.0
tags:
- generated_from_trainer
pipeline_tag: automatic-speech-recognition
model-index:
name: iloko_model
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iloko_model
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0095
- Wer: 0.0840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2784 | 1.11 | 100 | 2.9875 | 1.0 |
| 2.6899 | 2.22 | 200 | 2.6741 | 1.0 |
| 2.6177 | 3.33 | 300 | 2.6516 | 1.0 |
| 2.5327 | 4.44 | 400 | 2.4530 | 1.0 |
| 0.8653 | 5.56 | 500 | 0.5227 | 0.6547 |
| 0.3414 | 6.67 | 600 | 0.1830 | 0.2487 |
| 0.2299 | 7.78 | 700 | 0.1212 | 0.1877 |
| 0.1739 | 8.89 | 800 | 0.0843 | 0.1441 |
| 0.1242 | 10.0 | 900 | 0.0766 | 0.1441 |
| 0.1116 | 11.11 | 1000 | 0.0530 | 0.1145 |
| 0.0861 | 12.22 | 1100 | 0.0442 | 0.1047 |
| 0.1007 | 13.33 | 1200 | 0.0379 | 0.1023 |
| 0.0613 | 14.44 | 1300 | 0.0291 | 0.1006 |
| 0.0629 | 15.56 | 1400 | 0.0264 | 0.0961 |
| 0.047 | 16.67 | 1500 | 0.0238 | 0.0935 |
| 0.0797 | 17.78 | 1600 | 0.0226 | 0.0913 |
| 0.034 | 18.89 | 1700 | 0.0197 | 0.0893 |
| 0.0485 | 20.0 | 1800 | 0.0173 | 0.0905 |
| 0.0402 | 21.11 | 1900 | 0.0148 | 0.0902 |
| 0.0231 | 22.22 | 2000 | 0.0135 | 0.0891 |
| 0.0512 | 23.33 | 2100 | 0.0134 | 0.0861 |
| 0.0181 | 24.44 | 2200 | 0.0118 | 0.0842 |
| 0.0371 | 25.56 | 2300 | 0.0116 | 0.0867 |
| 0.0342 | 26.67 | 2400 | 0.0104 | 0.0863 |
| 0.0344 | 27.78 | 2500 | 0.0100 | 0.0850 |
| 0.0182 | 28.89 | 2600 | 0.0096 | 0.0839 |
| 0.0171 | 30.0 | 2700 | 0.0095 | 0.0840 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
denden/new_iloko | d2370a1065791863f6459677e58e2ba7c7e65a47 | 2021-11-04T02:28:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:timit_asr",
"transformers",
"audio",
"speech",
"license:academic free license v3.0",
"model-index"
] | automatic-speech-recognition | false | denden | null | denden/new_iloko | 2 | null | transformers | 23,848 | ---
language:
- en
license: Academic Free License v3.0
tags:
- audio # Example: audio
- automatic-speech-recognition # Example: automatic-speech-recognition
- speech # Example: speech
pipeline_tag: automatic-speech-recognition
datasets:
- timit_asr # Example: common_voice. Use dataset id from https://hf.co/datasets
metrics:
- wer
# Optional. Add this if you want to encode your eval results in a structured way.
model-index:
- name: iloko-model
results:
- task:
type: automatic-speech-recognition # Required. Example: automatic-speech-recognition
name: Iloko Speech Recognition # Optional. Example: Speech Recognition
metrics:
- type: wer # Required. Example: wer
value: 0.009 # Required. Example: 20.90
name: Test WER # Optional. Example: Test WER
# args: {arg_0} # Optional. Example for BLEU: max_order
---
Fine-tuned Ilokano (Iloko) speech recognition model, based on wav2vec2 XLSR-53. |
deokisys/BCtest | 096fe9f7e76a5f28437ad24616b29247a1ec33a8 | 2021-05-19T15:38:38.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | null | false | deokisys | null | deokisys/BCtest | 2 | null | transformers | 23,849 | Entry not found |
df4rfrrf/DialoGPT-medium-Aerith | a6b0adf14e7b54373ec7bd283727c0a8c563ef4d | 2021-09-02T11:37:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | df4rfrrf | null | df4rfrrf/DialoGPT-medium-Aerith | 2 | null | transformers | 23,850 | ---
tags:
- conversational
---
# Aerith GPT model |
diegozs97/chemprot-seed-0-1000k | 9d16bffd3e1465251667e592aee8f08a575ea746 | 2021-12-07T01:03:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-0-1000k | 2 | null | transformers | 23,851 | Entry not found |
diegozs97/chemprot-seed-0-1500k | 22c4503af68ab41f150adb5e8d77b5c46b352ef7 | 2021-12-07T00:11:10.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-0-1500k | 2 | null | transformers | 23,852 | Entry not found |
diegozs97/chemprot-seed-0-200k | 5ff34dac86011874f87863fe6bf18bfddf03d632 | 2021-12-06T23:41:25.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-0-200k | 2 | null | transformers | 23,853 | Entry not found |
diegozs97/chemprot-seed-0-60k | 8a9f31c14bba8cf6a5438c140189dea86b2d5a9d | 2021-12-06T23:31:10.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-0-60k | 2 | null | transformers | 23,854 | Entry not found |
diegozs97/chemprot-seed-0-700k | ed139de422c8233f8ff31fc4d50625bd674f9282 | 2021-12-06T23:51:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-0-700k | 2 | null | transformers | 23,855 | Entry not found |
diegozs97/chemprot-seed-1-2000k | 657b1bcfbfe11c67542280388bb234c3f162496a | 2021-12-07T01:31:00.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-1-2000k | 2 | null | transformers | 23,856 | Entry not found |
diegozs97/chemprot-seed-1-200k | 27177dbc8aa79802642cc48956a8b9cf5abd3141 | 2021-12-07T00:56:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-1-200k | 2 | null | transformers | 23,857 | Entry not found |
diegozs97/chemprot-seed-1-400k | c6d896da01e396c99b0634d9f009d7a1e9220457 | 2021-12-07T01:02:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-1-400k | 2 | null | transformers | 23,858 | Entry not found |
diegozs97/chemprot-seed-2-2000k | bbcf72e940867233afa27e5a7becb2ad45d1f625 | 2021-12-07T03:52:57.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-2-2000k | 2 | null | transformers | 23,859 | Entry not found |
diegozs97/chemprot-seed-2-60k | 5ecde83f4bef1db0d06418cc541a113e49011cac | 2021-12-07T02:59:21.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-2-60k | 2 | null | transformers | 23,860 | Entry not found |
diegozs97/chemprot-seed-3-1500k | c1b6776ffad16451c14a8244598cdcd29281338b | 2021-12-07T06:24:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-3-1500k | 2 | null | transformers | 23,861 | Entry not found |
diegozs97/chemprot-seed-3-2000k | 55f53839fb137c48f4f2e0f427838a3028d175a8 | 2021-12-07T06:35:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-3-2000k | 2 | null | transformers | 23,862 | Entry not found |
diegozs97/chemprot-seed-4-400k | b68d78505b10072ee942aad1f90e5163f3597cd6 | 2021-12-07T16:34:21.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-4-400k | 2 | null | transformers | 23,863 | Entry not found |
diegozs97/sciie-seed-0-200k | 2e1f3142851309fe25295964a535ade4bafd9ead | 2021-12-08T21:34:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-0-200k | 2 | null | transformers | 23,864 | Entry not found |
diegozs97/sciie-seed-0-60k | f4bc4c6b76ad3bb6fb7c0e7140127afe8adcdd95 | 2021-12-08T22:56:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-0-60k | 2 | null | transformers | 23,865 | Entry not found |
diegozs97/sciie-seed-0-700k | d3b57395e95522612215e00b3bbb748d07f09e1d | 2021-12-09T13:55:29.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-0-700k | 2 | null | transformers | 23,866 | Entry not found |
diegozs97/sciie-seed-2-0k | 97198db5dbe98079c4d77b981bcd64cc08fcc45a | 2021-12-07T04:19:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-2-0k | 2 | null | transformers | 23,867 | Entry not found |
diegozs97/sciie-seed-4-0k | 6126177c67158d3d42ba7dff9cb267dd5918b4cd | 2021-12-07T20:42:11.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-4-0k | 2 | null | transformers | 23,868 | Entry not found |
diegozs97/sciie-seed-4-1000k | f8e5df45f1dd94e501b1c4ee2b2f5cf03fa9fa3f | 2021-12-07T23:21:57.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-4-1000k | 2 | null | transformers | 23,869 | Entry not found |
diegozs97/sciie-seed-4-1500k | f9869b8505f445345a6c40ab4d4275cb1700d21d | 2021-12-07T23:32:24.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-4-1500k | 2 | null | transformers | 23,870 | Entry not found |
diegozs97/sciie-seed-4-200k | 4485d7b9a66e019ca09999930ce54b55854d4a01 | 2021-12-07T21:01:21.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-4-200k | 2 | null | transformers | 23,871 | Entry not found |
diegozs97/sciie-seed-4-20k | 9f44a1e7c5719144c9bff6d0cde3ea13439b6b8e | 2021-12-07T20:46:56.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-4-20k | 2 | null | transformers | 23,872 | Entry not found |
distractedm1nd/wav2vec-en-finetuned-on-cryptocurrency | dcd6195e97868b03a9d073f08d60370f28409ffd | 2021-07-06T01:32:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"audio",
"license:mit"
] | automatic-speech-recognition | false | distractedm1nd | null | distractedm1nd/wav2vec-en-finetuned-on-cryptocurrency | 2 | null | transformers | 23,873 | ---
language: en
tags:
- audio
- automatic-speech-recognition
metrics:
- wer
license: mit
---
We took `facebook/wav2vec2-large-960h` and fine-tuned it using 1400 audio clips (around 10-15 seconds each) from various cryptocurrency-related podcasts. To label the data, we downloaded cryptocurrency podcasts from YouTube along with their subtitle data and split the clips up by sentence. We then compared the YouTube transcriptions with `facebook/wav2vec2-large-960h` to correct many of their mistakes. We could probably achieve better results with more data cleanup.
On our data we achieved a WER of 13.1%. `facebook/wav2vec2-large-960h` only reached a WER of 27% on our data.
## Usage
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load the processor and model
processor = Wav2Vec2Processor.from_pretrained("distractedm1nd/wav2vec-en-finetuned-on-cryptocurrency")
model = Wav2Vec2ForCTC.from_pretrained("distractedm1nd/wav2vec-en-finetuned-on-cryptocurrency")
filename = "INSERT_FILENAME"
audio, sampling_rate = sf.read(filename)
input_values = processor(audio, return_tensors="pt", padding="longest", sampling_rate=sampling_rate).input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)  # decode with the processor loaded above
```
|
dkleczek/Polish_BART_base_OPI | c193c8479ce9ae64923aa2197d7d7ee37f31139e | 2021-09-02T14:25:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | dkleczek | null | dkleczek/Polish_BART_base_OPI | 2 | null | transformers | 23,874 | Entry not found |
dkssud/wav2vec2-base-demo-colab | a2a6ae2b4f0d0c78d8c827209812b9ea0d52d125 | 2021-12-19T09:54:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | dkssud | null | dkssud/wav2vec2-base-demo-colab | 2 | null | transformers | 23,875 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4171
- Wer: 0.3452
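A minimal inference sketch, assuming 16 kHz mono input audio (the file path below is a placeholder):
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "dkssud/wav2vec2-base-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# read a 16 kHz mono wav file (placeholder path)
speech, sampling_rate = sf.read("example.wav")
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```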
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0054 | 4.0 | 500 | 1.5456 | 0.9005 |
| 0.8183 | 8.0 | 1000 | 0.4738 | 0.4839 |
| 0.3019 | 12.0 | 1500 | 0.4280 | 0.4047 |
| 0.1738 | 16.0 | 2000 | 0.4584 | 0.3738 |
| 0.1285 | 20.0 | 2500 | 0.4418 | 0.3593 |
| 0.1104 | 24.0 | 3000 | 0.4110 | 0.3479 |
| 0.0828 | 28.0 | 3500 | 0.4171 | 0.3452 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
dlarhkd1211/koelectra_adcode | 87a36ef24f7ed1e1a09d1555de0f21171963e699 | 2021-08-11T06:29:17.000Z | [
"pytorch",
"tf"
] | null | false | dlarhkd1211 | null | dlarhkd1211/koelectra_adcode | 2 | null | null | 23,876 | Entry not found |
dmis-lab/biosyn-biobert-bc5cdr-chemical | c0a1b2cf51b39cf353e730198d2a86622ca7e1f7 | 2021-10-25T03:52:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | dmis-lab | null | dmis-lab/biosyn-biobert-bc5cdr-chemical | 2 | null | transformers | 23,877 | Entry not found |
donggyu/mnmt_decoder_ko | 1d4eb55545bfcec585e04d8b6783966699ed155a | 2021-12-20T06:22:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | donggyu | null | donggyu/mnmt_decoder_ko | 2 | null | transformers | 23,878 | Entry not found |
doufulai/t5-question-generation-en-model-v1 | e2f521b64ac16f36b28f8dd28dec68bc28ff6b90 | 2021-10-30T11:43:50.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | doufulai | null | doufulai/t5-question-generation-en-model-v1 | 2 | null | transformers | 23,879 | Entry not found |
dpalominop/biobert-giotto | 72a85e7ca3107d6681ce0edb08638e453f505c11 | 2021-05-20T03:25:33.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dpalominop | null | dpalominop/biobert-giotto | 2 | null | transformers | 23,880 | Entry not found |
duongsau/iqtree-similarity | 213917d73042460e4900d56c8cec2a69bd2ead0a | 2021-11-07T21:25:53.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | duongsau | null | duongsau/iqtree-similarity | 2 | null | transformers | 23,881 | Entry not found |
eAsyle/roberta_base_custom_QA | 443a0fb6cc03d003b341944cd8208db69ba7b476 | 2021-08-21T17:47:12.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | eAsyle | null | eAsyle/roberta_base_custom_QA | 2 | null | transformers | 23,882 | Entry not found |
eAsyle/testABSA3 | 3b956a619f8ecf1e498500be356bf6ae51f5a09d | 2021-08-22T16:10:22.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | eAsyle | null | eAsyle/testABSA3 | 2 | null | transformers | 23,883 | Entry not found |
edge2992/dummy-model | 5aab228d43b9feca4c36a932a196ebcbbcfc356b | 2021-12-05T06:45:36.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | edge2992 | null | edge2992/dummy-model | 2 | null | transformers | 23,884 | Entry not found |
edmondz/layoutlmv2-finetuned-funsd-test | 6b50943e8d5ccec54a178452f812ae167f59de05 | 2021-10-18T08:38:55.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | edmondz | null | edmondz/layoutlmv2-finetuned-funsd-test | 2 | null | transformers | 23,885 | Entry not found |
ekkasilina/small_baseline | 670b04bb18f2a0fae1468d0b66c151cbe8b6e785 | 2021-10-26T14:03:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ekkasilina | null | ekkasilina/small_baseline | 2 | null | transformers | 23,886 | Entry not found |
eldritch-axolotl/Rick | 2ce283c892f013f9b3a1f98d68717cda7c0ec6f4 | 2022-01-19T12:20:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | eldritch-axolotl | null | eldritch-axolotl/Rick | 2 | null | transformers | 23,887 | ---
tags:
- conversational
---
# Rick DialoGPT model |
elgeish/cs224n-squad2.0-distilbert-base-uncased | a535a2603809b69e74872815ac150c61f6485db1 | 2020-12-11T21:39:04.000Z | [
"pytorch",
"distilbert",
"question-answering",
"arxiv:2004.07067",
"transformers",
"autotrain_compatible"
] | question-answering | false | elgeish | null | elgeish/cs224n-squad2.0-distilbert-base-uncased | 2 | null | transformers | 23,888 | ## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
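A minimal usage sketch with the `question-answering` pipeline (the question/context pair below is only illustrative):
```python
from transformers import pipeline

model_name = "elgeish/cs224n-squad2.0-distilbert-base-uncased"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

result = qa(
    question="How many examples were used for evaluation?",
    context="Evaluation and model selection were performed using roughly half "
            "of the official dev set, 6078 examples, picked at random.",
    handle_impossible_answer=True,  # SQuAD2.0-style models may predict "no answer"
)
print(result)
```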
## Results
```json
{
"exact": 65.16946363935504,
"f1": 67.87348075352251,
"total": 6078,
"HasAns_exact": 69.51890034364261,
"HasAns_f1": 75.16667217179045,
"HasAns_total": 2910,
"NoAns_exact": 61.17424242424242,
"NoAns_f1": 61.17424242424242,
"NoAns_total": 3168,
"best_exact": 65.16946363935504,
"best_exact_thresh": 0.0,
"best_f1": 67.87348075352243,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 24,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 384,
"model_name_or_path": "distilbert-base-uncased-distilled-squad",
"model_type": "distilbert",
"num_train_epochs": 4,
"per_gpu_train_batch_size": 32,
"save_steps": 5000,
"seed": 42,
"train_batch_size": 32,
"version_2_with_negative": true,
"warmup_steps": 0,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2)
* [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
|
elgeish/cs224n-squad2.0-roberta-base | 163f9ed159e759182c1d83ca10ab3c2289ad60ef | 2021-05-20T16:16:38.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"arxiv:2004.07067",
"transformers",
"autotrain_compatible"
] | question-answering | false | elgeish | null | elgeish/cs224n-squad2.0-roberta-base | 2 | null | transformers | 23,889 | ## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
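A minimal sketch that calls the model directly (the question/context pair is only illustrative; an empty decoded span is treated as a SQuAD2.0-style "no answer"):
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "elgeish/cs224n-squad2.0-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "How many examples were used for evaluation?"
context = "Evaluation and model selection were performed using 6078 examples picked at random."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# greedy span extraction; an empty decoded span means the model prefers "no answer"
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start:end + 1], skip_special_tokens=True).strip()
print(answer if answer else "<no answer>")
```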
## Results
```json
{
"exact": 75.32082922013821,
"f1": 78.66699523704254,
"total": 6078,
"HasAns_exact": 74.84536082474227,
"HasAns_f1": 81.83436324767868,
"HasAns_total": 2910,
"NoAns_exact": 75.75757575757575,
"NoAns_f1": 75.75757575757575,
"NoAns_total": 3168,
"best_exact": 75.32082922013821,
"best_exact_thresh": 0.0,
"best_f1": 78.66699523704266,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 24,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 384,
"model_name_or_path": "roberta-base",
"model_type": "roberta",
"num_train_epochs": 4,
"per_gpu_train_batch_size": 16,
"save_steps": 5000,
"seed": 42,
"train_batch_size": 16,
"version_2_with_negative": true,
"warmup_steps": 0,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2)
* [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
|
eli4s/prunedBert-L12-h384-A6-finetuned | 72c4e8575af09a1fa657aba75d001b542bf7ba1a | 2021-07-30T10:40:33.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | eli4s | null | eli4s/prunedBert-L12-h384-A6-finetuned | 2 | 2 | transformers | 23,890 | This model was pretrained on the bookcorpus dataset using knowledge distillation.
Although it shares the same architecture as BERT, it has a hidden size of 384 (half of BERT's) and 6 attention heads (hence the same head size as BERT).
The model's weights were initialized by pruning the weights of bert-base-uncased.
Knowledge distillation with multiple loss functions was then performed to fine-tune the model.
Note: the tokenizer is the same as that of bert-base-uncased.
To load the model & tokenizer:
````python
from transformers import AutoModelForMaskedLM, BertTokenizer
model_name = "eli4s/prunedBert-L12-h384-A6-finetuned"
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)
````
To use it on a sentence:
````python
import torch
sentence = "Let's have a [MASK]."
model.eval()
inputs = tokenizer([sentence], padding='longest', return_tensors='pt')
output = model(inputs['input_ids'], attention_mask=inputs['attention_mask'])
mask_index = inputs['input_ids'].tolist()[0].index(tokenizer.mask_token_id)  # the [MASK] token id (103 for bert-base-uncased)
masked_token = output['logits'][0][mask_index].argmax(axis=-1)
predicted_token = tokenizer.decode(masked_token)
print(predicted_token)
````
We can also retrieve the n most likely predictions:
````python
top_n = 5
vocab_size = model.config.vocab_size
logits = output['logits'][0][mask_index].tolist()
top_tokens = sorted(list(range(vocab_size)), key=lambda i:logits[i], reverse=True)[:top_n]
tokenizer.decode(top_tokens)
```` |
eliasedwin7/MalayalamBERTo | 79ee227a6b8c02b7e213c760346f0c8b60bbb915 | 2021-05-20T16:18:31.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | eliasedwin7 | null | eliasedwin7/MalayalamBERTo | 2 | null | transformers | 23,891 | Entry not found |
eliotm/t5-small-finetuned-en-to-ro-lr0.001 | 9a0e3a680048fdf17334b0dbb5e4b4eb44213c51 | 2021-12-03T01:45:16.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | eliotm | null | eliotm/t5-small-finetuned-en-to-ro-lr0.001 | 2 | null | transformers | 23,892 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-lr0.001
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 5.8837
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-lr0.001
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8309
- Bleu: 5.8837
- Gen Len: 18.2656
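A minimal translation sketch, assuming the standard "translate English to Romanian:" task prefix that T5 uses for WMT16 en-ro:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "eliotm/t5-small-finetuned-en-to-ro-lr0.001"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

text = "translate English to Romanian: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```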
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.9442 | 1.0 | 7629 | 1.8309 | 5.8837 | 18.2656 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
eliza-dukim/bert-base-multilingual-cased_korquad-v1 | f37aee496157f5ca7df9480d5b222e858fcc1729 | 2021-10-13T16:22:41.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | eliza-dukim | null | eliza-dukim/bert-base-multilingual-cased_korquad-v1 | 2 | null | transformers | 23,893 | ## Boostcamp AI Tech Special Mission 01, Multi-lingual BERT for KorQuAD v1
Evaluation results: `{'exact_match': 69.89954970557672, 'f1': 77.40349093437989, 'epoch': 15.0}` |
elusive-magnolia/dummy-model | 6cd9187db787c8d997094ae8fba8e0d7dc064bc6 | 2021-11-02T16:45:04.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | elusive-magnolia | null | elusive-magnolia/dummy-model | 2 | null | transformers | 23,894 | Entry not found |
emillykkejensen/daT5-large | badbda5a855decd448ec2b88e085ef74f60ef2d8 | 2022-01-06T11:15:26.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"da",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | emillykkejensen | null | emillykkejensen/daT5-large | 2 | null | transformers | 23,895 | ---
language:
- da
license: apache-2.0
---
## daT5-large
A smaller version of [Google's mt5-large](https://huggingface.co/google/mt5-large) model, where the original model is reduced to include only Danish embeddings.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emillykkejensen/daT5-large")
model = AutoModel.from_pretrained("emillykkejensen/daT5-large")
```
## Further reading
[Gist](https://gist.github.com/emillykkejensen/8bf1b323495efc7252dee966e6bc1b5c) showing (in Danish) how the embeddings are extracted (for mt5-base)
[Article](https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) explaining how to do it by [David Dale](https://huggingface.co/cointegrated)
## Also check out
[daT5-base](https://huggingface.co/emillykkejensen/daT5-base) |
empushy/gpt2-alerts | 685de99642741a071ce2c6aa9475a149a47751f0 | 2021-05-21T15:48:27.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | empushy | null | empushy/gpt2-alerts | 2 | null | transformers | 23,896 | Entry not found |
empushy/gpt2-emulator | 86ef5c098ee559f7348e6a2e819ac38a80180124 | 2021-05-22T19:04:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | empushy | null | empushy/gpt2-emulator | 2 | null | transformers | 23,897 | Entry not found |
emre/wav2vec2-large-xlsr-53-demo-colab | 0f1d6bdf6e19b7c92fb0666be511c673146ffc60 | 2022-01-24T10:54:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | emre | null | emre/wav2vec2-large-xlsr-53-demo-colab | 2 | null | transformers | 23,898 | ---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3966
- Wer: 0.4834
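A minimal transcription sketch with the `automatic-speech-recognition` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# the model id comes from this card; "example.wav" is a placeholder for a local audio file
asr = pipeline("automatic-speech-recognition", model="emre/wav2vec2-large-xlsr-53-demo-colab")
transcription = asr("example.wav")  # the pipeline decodes the file with ffmpeg at the model's 16 kHz rate
print(transcription["text"])
```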
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1516 | 4.21 | 400 | 2.7673 | 1.0 |
| 0.9134 | 8.42 | 800 | 0.4618 | 0.6418 |
| 0.3273 | 12.63 | 1200 | 0.4188 | 0.5535 |
| 0.2252 | 16.84 | 1600 | 0.4144 | 0.5232 |
| 0.1692 | 21.05 | 2000 | 0.3995 | 0.5030 |
| 0.1355 | 25.26 | 2400 | 0.4073 | 0.4920 |
| 0.1172 | 29.47 | 2800 | 0.3966 | 0.4834 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
emre/wav2vec2-xls-r-300m-Br-small | 98f05c20a2f290188b4af43383fe62a88a9439a9 | 2022-03-24T11:55:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"br",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | emre | null | emre/wav2vec2-xls-r-300m-Br-small | 2 | null | transformers | 23,899 | ---
license: apache-2.0
language: br
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Br-small
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice br
type: common_voice
args: br
metrics:
- name: Test WER
type: wer
value: 66.75
---
# wav2vec2-xls-r-300m-Br-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0573
- Wer: 0.6675
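A rough evaluation sketch on a small slice of the Breton Common Voice test split, assuming `torchaudio` can decode the clips; the text normalization a full evaluation would apply is omitted:
```python
import torch
import torchaudio
from datasets import load_dataset
from jiwer import wer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "emre/wav2vec2-xls-r-300m-Br-small"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

test = load_dataset("common_voice", "br", split="test[:20]")  # small slice for a quick check
resampler = torchaudio.transforms.Resample(48_000, 16_000)  # Common Voice clips are 48 kHz

def transcribe(batch):
    speech, _ = torchaudio.load(batch["path"])
    inputs = processor(resampler(speech).squeeze().numpy(),
                       sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    batch["prediction"] = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
    return batch

test = test.map(transcribe)
print("WER:", wer(test["sentence"], test["prediction"]))
```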
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.7464 | 2.79 | 400 | 1.7474 | 1.1018 |
| 1.1117 | 5.59 | 800 | 0.9434 | 0.8697 |
| 0.6481 | 8.39 | 1200 | 0.9251 | 0.7910 |
| 0.4754 | 11.19 | 1600 | 0.9208 | 0.7412 |
| 0.3602 | 13.98 | 2000 | 0.9284 | 0.7232 |
| 0.2873 | 16.78 | 2400 | 0.9299 | 0.6940 |
| 0.2386 | 19.58 | 2800 | 1.0182 | 0.6927 |
| 0.1971 | 22.38 | 3200 | 1.0456 | 0.6898 |
| 0.1749 | 25.17 | 3600 | 1.0208 | 0.6769 |
| 0.1487 | 27.97 | 4000 | 1.0573 | 0.6675 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|