modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
mrm8488/electricidad-base-finetuned-ner | 6e759fe1e4ade1e92320f2ead88101b5c9698bcb | 2020-08-24T16:31:14.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | mrm8488 | null | mrm8488/electricidad-base-finetuned-ner | 0 | null | transformers | 35,700 | Entry not found |
mrm8488/electricidad-base-finetuned-pos | db6a864999fdf3ac7f704129c2397a5a32056c89 | 2020-08-24T16:52:15.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | mrm8488 | null | mrm8488/electricidad-base-finetuned-pos | 0 | null | transformers | 35,701 | Entry not found |
mrm8488/roberta-base-bne-finetuned-sqac-retriever | 395e704d6a12cd7beb32d0b244da3bd989490da3 | 2022-02-04T17:59:07.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | mrm8488 | null | mrm8488/roberta-base-bne-finetuned-sqac-retriever | 0 | 1 | sentence-transformers | 35,702 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# mrm8488/roberta-base-bne-finetuned-sqac-retriever
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mrm8488/roberta-base-bne-finetuned-sqac-retriever')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mrm8488/roberta-base-bne-finetuned-sqac-retriever')
model = AutoModel.from_pretrained('mrm8488/roberta-base-bne-finetuned-sqac-retriever')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mrm8488/roberta-base-bne-finetuned-sqac-retriever)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 939 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 93,
"weight_decay": 0.01
}
```
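The training pairs themselves are not included in this card, but the parameters above map onto the sentence-transformers training API roughly as follows. This is a minimal sketch with placeholder data; the base checkpoint `PlanTL-GOB-ES/roberta-base-bne` is an assumption inferred from the model name.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Placeholder question/passage pairs; the actual SQAC retrieval pairs are not part of this card.
train_examples = [
    InputExample(texts=[f"pregunta de ejemplo {i}", f"pasaje de respuesta {i}"])
    for i in range(64)
]

model = SentenceTransformer("PlanTL-GOB-ES/roberta-base-bne")  # assumed base checkpoint
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default similarity_fct

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=93,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```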
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
mrm8488/roberta-base-finetuned-multitask | af2c180d2c07a6654b7277e3ac234da9e79b3ca6 | 2020-06-23T20:13:34.000Z | [
"pytorch",
"transformers"
] | null | false | mrm8488 | null | mrm8488/roberta-base-finetuned-multitask | 0 | null | transformers | 35,703 | Entry not found |
mrm8488/t5-base-finetuned-math-list-prime-factors | ea062d4d455eb9877c01a82135555421fc39f68a | 2021-06-23T12:50:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-math-list-prime-factors | 0 | null | transformers | 35,704 | Entry not found |
mrm8488/t5-base-finetuned-qasc-sc | e405dda22ebfc864ef9212ca76c42e4c2ebb1491 | 2020-11-01T10:04:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-qasc-sc | 0 | null | transformers | 35,705 | Entry not found |
mrm8488/t5-base-finetuned-spa-squadv1 | 336b1a26662bbb63f617ab99b4b3041e530f1f3e | 2020-05-26T13:23:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-spa-squadv1 | 0 | null | transformers | 35,706 | Entry not found |
mrm8488/t5-base-finetuned-swag | a98fc10cdd70c654f3282b8b5778f18e304c445a | 2020-06-18T01:12:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-swag | 0 | null | transformers | 35,707 | Entry not found |
mrm8488/t5-small-finetuned-AESLC-summarization | c310383b4c337edcffdd1e9a875df965a2ea731e | 2020-07-22T08:42:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-small-finetuned-AESLC-summarization | 0 | null | transformers | 35,708 | Entry not found |
mrm8488/wav2vec2-large-xlsr-53-breton | d1cd1ab90a8e1270e676d28503ef363cb91cabf5 | 2021-07-06T12:57:34.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"br",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mrm8488 | null | mrm8488/wav2vec2-large-xlsr-53-breton | 0 | null | transformers | 35,709 | ---
language: br
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Breton Manuel Romero
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice br
type: common_voice
args: br
metrics:
- name: Test WER
type: wer
value: 46.49
---
# Wav2Vec2-Large-XLSR-53-breton
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Breton using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "br", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Breton test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "br", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the fine-tuned model over the test set and collect its predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 46.49 %
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found ???
|
mrm8488/wav2vec2-large-xlsr-53-esperanto | fd2f793058ef80ece5aa0c635c60fbf6a5fe100c | 2021-07-06T13:02:46.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"eo",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mrm8488 | null | mrm8488/wav2vec2-large-xlsr-53-esperanto | 0 | null | transformers | 35,710 | ---
language: eo
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Esperanto Manuel Romero
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice eo
type: common_voice
args: eo
metrics:
- name: Test WER
type: wer
value: 15.86
---
# Wav2Vec2-Large-XLSR-53-esperanto
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Esperanto using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eo", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Esperanto test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "eo", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the fine-tuned model over the test set and collect its predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 15.86 %
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found ??? |
mrm8488/wav2vec2-large-xlsr-53-euskera | 89afb3abbc14de3f07dea8484e2b8a4276329ffd | 2021-07-06T13:09:18.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"eu",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mrm8488 | null | mrm8488/wav2vec2-large-xlsr-53-euskera | 0 | null | transformers | 35,711 | ---
language: eu
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Euskera Manuel Romero
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice eu
type: common_voice
args: eu
metrics:
- name: Test WER
type: wer
value: 24.03
---
# Wav2Vec2-Large-XLSR-53-euskera
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Euskera using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eu", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Euskera test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "eu", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the fine-tuned model over the test set and collect its predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 24.03 %
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found ???
|
mrojas/bio-bert-base-spanish-wwm-cased | 19a4c7ed6d68d34d0d920da92ffe1ba326d96d5d | 2022-02-07T00:52:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mrojas | null | mrojas/bio-bert-base-spanish-wwm-cased | 0 | null | transformers | 35,712 | Entry not found |
msakthiganesh/TabQGen-Base | 731b5bb2f6b6db089f138977354ef56a50ca84d0 | 2021-08-18T14:38:06.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | msakthiganesh | null | msakthiganesh/TabQGen-Base | 0 | null | transformers | 35,713 | > **TabQGen** model is released along with the dataset **Question Generation for Tables** in the paper - **Answer-Aware Question Generation from Tabular and Textual Data using T5**
|
msakthiganesh/TabQGen-Large | f67493959357b2f6ca8b0bb1a0e421b04304b087 | 2021-08-18T14:37:35.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | msakthiganesh | null | msakthiganesh/TabQGen-Large | 0 | null | transformers | 35,714 | > **TabQGen** model is released along with the dataset **Question Generation for Tables** in the paper - **Answer-Aware Question Generation from Tabular and Textual Data using T5**
|
msarmi9/multi30k | ce65571f421c2a11ab8252046973dcac0f975ed0 | 2022-02-22T23:28:58.000Z | [
"tensorboard",
"de",
"en",
"dataset:multi30k",
"translation",
"pytorch",
"license:mit",
"model-index"
] | translation | false | msarmi9 | null | msarmi9/multi30k | 0 | 2 | null | 35,715 | ---
license: mit
language:
- de
- en
tags:
- translation
- pytorch
datasets:
- multi30k
metrics:
- bleu
model-index:
- name: multi30k
results:
- task:
type: translation
dataset:
type: multi30k
name: multi30k-de-en
metrics:
- type: bleu
value: 33.468
name: Test BLEU
args: n_gram=4
---
# Seq2seq + Attention
Pytorch implementation of [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473). Trained on the [Multi30k-de-en](http://www.statmt.org/wmt16/multimodal-task.html#task1) dataset with sentencepiece as the tokenizer.
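For reference, the alignment model at the heart of that paper is additive (Bahdanau) attention. Below is a minimal PyTorch sketch of the scoring step; it is an illustration of the technique, not the repository's exact module.
```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style additive attention: score(s, h) = v^T tanh(W_s s + W_h h)."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.W_s = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_h = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, decoder_state, encoder_outputs):
        # decoder_state: (batch, hidden); encoder_outputs: (batch, src_len, hidden)
        scores = self.v(torch.tanh(self.W_s(decoder_state).unsqueeze(1) + self.W_h(encoder_outputs)))
        weights = torch.softmax(scores.squeeze(-1), dim=-1)          # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs)   # (batch, 1, hidden)
        return context.squeeze(1), weights
```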
Here's the attention heatmap of a random sample from the test set:

|
mse30/bart-base-finetuned-cnn | 5d3bf17f7f5ceb51524af0c99f2308aa14dc3569 | 2021-10-09T01:50:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mse30 | null | mse30/bart-base-finetuned-cnn | 0 | null | transformers | 35,716 | Entry not found |
mtr0930/koelectra-base-v3_epoch-10 | 0fedbc0e700e4b501207c64c769571068102aab4 | 2021-08-18T17:33:01.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mtr0930 | null | mtr0930/koelectra-base-v3_epoch-10 | 0 | null | transformers | 35,717 | i-manual KoELECTRA-base-v3 |
mtr0930/koelectra-base-v3_epoch-100 | 311fe8e2885b069bc8c36f001fc3264e7d438904 | 2021-08-23T12:08:35.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mtr0930 | null | mtr0930/koelectra-base-v3_epoch-100 | 0 | null | transformers | 35,718 | Entry not found |
mukherjeearnab/opsolBERT | f78218028511cc6101c9e4683ac4d35e6a5c8ab6 | 2021-05-20T18:38:11.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mukherjeearnab | null | mukherjeearnab/opsolBERT | 0 | null | transformers | 35,719 | hello
|
munggok/xlsr_indonesia | 266eccdd703db568c13a336490c981f4ab3f1cad | 2021-03-18T09:53:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"id",
"dataset:common_voice",
"transformers",
"speech",
"audio",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | false | munggok | null | munggok/xlsr_indonesia | 0 | null | transformers | 35,720 | ---
language: id
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
- xlsr-fine-tuning-week
license: apache-2.0
---
## Evaluation on Common Voice ID Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "munggok/xlsr_indonesia"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "id", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 25.7 % |
mutamuta/DialoGPT-small-rick | 71254a3400dd930d2a55d971be01b6f0aeee70a6 | 2021-09-22T01:05:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mutamuta | null | mutamuta/DialoGPT-small-rick | 0 | null | transformers | 35,721 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
mutamuta/DialoGPT-spongebob-small | d356ffb7c160a5fcbb778f2cf1ce731df7b53637 | 2021-09-22T20:06:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mutamuta | null | mutamuta/DialoGPT-spongebob-small | 0 | null | transformers | 35,722 | ---
tags:
- conversational
---
# SpongeBob DialoGPT Model |
mvip/wav2vec2-large-xls-r-300m-tr | 30c4de4ad3667f26522ebf5bd379c24665c8ba0a | 2022-02-11T10:58:45.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mvip | null | mvip/wav2vec2-large-xls-r-300m-tr | 0 | null | transformers | 35,723 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4074
- Wer: 0.4227
## Model description
More information needed
## Intended uses & limitations
More information needed
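As a hedged illustration (not part of the original card), the checkpoint can typically be exercised through the standard ASR pipeline; the audio path below is a placeholder and the input should be 16 kHz mono audio:
```python
from transformers import pipeline

# Hedged sketch: standard ASR pipeline usage for a fine-tuned XLS-R CTC checkpoint.
asr = pipeline("automatic-speech-recognition", model="mvip/wav2vec2-large-xls-r-300m-tr")
print(asr("example_turkish_16khz.wav"))  # placeholder path; expects 16 kHz audio
```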
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9399 | 4.21 | 400 | 0.7252 | 0.7387 |
| 0.4147 | 8.42 | 800 | 0.4693 | 0.5201 |
| 0.1855 | 12.63 | 1200 | 0.4584 | 0.4848 |
| 0.1256 | 16.84 | 1600 | 0.4464 | 0.4708 |
| 0.0948 | 21.05 | 2000 | 0.4261 | 0.4389 |
| 0.0714 | 25.26 | 2400 | 0.4331 | 0.4349 |
| 0.0532 | 29.47 | 2800 | 0.4074 | 0.4227 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
mvip/wav2vec2-large-xls-r-300m-turkish-local-2 | e5d18f24335c3b725c6ef5bba49d0bf7fc8f04a6 | 2022-02-23T14:27:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | mvip | null | mvip/wav2vec2-large-xls-r-300m-turkish-local-2 | 0 | null | transformers | 35,724 | Entry not found |
mvonwyl/roberta-base-finetuned-squad2 | 028722d9913840390f897a26998dbc5e581374c0 | 2021-11-01T17:51:41.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | mvonwyl | null | mvonwyl/roberta-base-finetuned-squad2 | 0 | null | transformers | 35,725 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.88 | 1.0 | 8160 | 0.8129 |
| 0.6643 | 2.0 | 16320 | 0.8567 |
| 0.5096 | 3.0 | 24480 | 0.9325 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
naiyalee/DialoGPT-small-neku | 541315306340df225664357d7a8458c3210fc92d | 2021-08-06T13:06:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | naiyalee | null | naiyalee/DialoGPT-small-neku | 0 | null | transformers | 35,726 | |
namanrana16/DialoGPT-small-TrumpBot | bab78c77fe2493e4b3faef8ec1ef85122d6c9fd4 | 2021-09-29T16:58:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | namanrana16 | null | namanrana16/DialoGPT-small-TrumpBot | 0 | null | transformers | 35,727 | ---
tags:
- conversational
---
# TrumpBot DialoGPT Model |
nanometeres/DialoGPT-small-halbot | 946e5efd7038b45a72f4e7761a494f523f1b4afe | 2021-08-28T05:41:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | nanometeres | null | nanometeres/DialoGPT-small-halbot | 0 | null | transformers | 35,728 | ---
tags:
- conversational
---
# lilhalbot DialoGPT Model |
napoler/chinese_roberta_L-2_H-512-8021 | 778bcf957ddd4026b458dd1f441d372753d64420 | 2021-09-26T07:10:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | napoler | null | napoler/chinese_roberta_L-2_H-512-8021 | 0 | 1 | transformers | 35,729 | Entry not found |
napoler/chinese_roberta_L-2_H-512_relative_key_query_token_type_100 | 62285452251b918d7b95b4cfb4aea944076d9422 | 2021-09-26T03:21:48.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | napoler | null | napoler/chinese_roberta_L-2_H-512_relative_key_query_token_type_100 | 0 | null | transformers | 35,730 | -修改为相对位置
-对内容类型进行修改
```python
from transformers import BertTokenizer, BertModel,BertConfig
Config = BertConfig.from_pretrained("napoler/chinese_roberta_L-2_H-512_relative_key_query_token_type_100")
tokenizer = BertTokenizer.from_pretrained('napoler/chinese_roberta_L-2_H-512_relative_key_query_token_type_100')
model = BertModel.from_pretrained("napoler/chinese_roberta_L-2_H-512_relative_key_query_token_type_100",config=Config)
```
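The relative-position change above corresponds to BERT's `position_embedding_type` configuration field; the short check below is a hedged illustration (the expected values are inferred from the model name and card, not verified here):
```python
# Hedged illustration, reusing `Config` from the snippet above.
print(Config.position_embedding_type)  # expected: "relative_key_query" (inferred from the model name)
print(Config.type_vocab_size)          # expected: 100 ("token_type_100" in the name is an assumption)
```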
Modification details:
https://www.kaggle.com/terrychanorg/bert-notebook9525623d9e |
napoler/chinese_roberta_L-4_H-512_rdrop | f3d98b8c2ff92d84e50f43de54dcc5bfd1d7ddf5 | 2022-01-22T01:00:59.000Z | [
"pytorch",
"bert",
"feature-extraction",
"Chinese",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | napoler | null | napoler/chinese_roberta_L-4_H-512_rdrop | 0 | null | transformers | 35,731 | ---
license: "apache-2.0"
language: Chinese
widget:
- text: "北京是[MASK]国的首都。"
---
chinese_roberta_L-4_H-512_rdrop |
napoler/mcbert_l3_768 | 0c729154901cb1862355cc67629fb9c8a9708fc7 | 2021-12-01T08:41:47.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | napoler | null | napoler/mcbert_l3_768 | 0 | null | transformers | 35,732 | 3 layers |
narabzad/upload | 11ed70da6c8923942f6d8ca60f1c1a32919a1921 | 2020-08-15T16:56:43.000Z | [
"pytorch",
"transformers"
] | null | false | narabzad | null | narabzad/upload | 0 | null | transformers | 35,733 | Entry not found |
nateraw/aae-fraud-base-7 | 2d8a3f7de9e80505721e676257157fcfbd2313a3 | 2021-07-05T09:12:06.000Z | [
"pytorch",
"tensorboard",
"transformers"
] | null | false | nateraw | null | nateraw/aae-fraud-base-7 | 0 | null | transformers | 35,734 | Entry not found |
nateraw/autoencoder-cifar10 | 3c19b7fdd1a0261feb5f3211616fdf3e669f84f1 | 2021-06-30T06:26:34.000Z | [
"pytorch",
"transformers"
] | null | false | nateraw | null | nateraw/autoencoder-cifar10 | 0 | null | transformers | 35,735 | Entry not found |
nateraw/basic-ae-cifar10 | b06a6d68830525b6ce2b15c1930709365a55a2ca | 2021-06-30T02:05:17.000Z | [
"pytorch",
"transformers"
] | null | false | nateraw | null | nateraw/basic-ae-cifar10 | 0 | null | transformers | 35,736 | Entry not found |
nateraw/cnn-dummy | 3cc0d1a11a7ece62036c1e3c020aaea2054dd648 | 2021-09-03T20:39:32.000Z | [
"pytorch"
] | null | false | nateraw | null | nateraw/cnn-dummy | 0 | null | null | 35,737 | Entry not found |
nateraw/dummy | e39e45cf308894b7c6ced409c9df3d7a4baec8c2 | 2021-06-29T06:34:02.000Z | [
"pytorch",
"transformers"
] | null | false | nateraw | null | nateraw/dummy | 0 | null | transformers | 35,738 | Entry not found |
nateraw/resnet101 | a04b7ca0ca356dc1246557c4fda0523768520bdf | 2021-04-13T09:54:57.000Z | [
"pytorch",
"resnet",
"transformers"
] | null | false | nateraw | null | nateraw/resnet101 | 0 | null | transformers | 35,739 | Entry not found |
nateraw/resnet18-dummy | 0d8abf42e371de826225c110a70f35c321ed34bb | 2021-09-08T00:24:38.000Z | [
"pytorch",
"transformers"
] | null | false | nateraw | null | nateraw/resnet18-dummy | 0 | null | transformers | 35,740 | Entry not found |
nateraw/resnet18 | 69868120cb36991d0b1079056060b72251a78a15 | 2021-04-13T10:06:56.000Z | [
"pytorch",
"resnet",
"transformers"
] | null | false | nateraw | null | nateraw/resnet18 | 0 | null | transformers | 35,741 | Entry not found |
nateraw/resnet34 | ef0642dabeaa8d181aed91ea939c63622ce7010a | 2021-04-13T10:09:31.000Z | [
"pytorch",
"resnet",
"transformers"
] | null | false | nateraw | null | nateraw/resnet34 | 0 | null | transformers | 35,742 | Entry not found |
nateraw/resnet50-beans-dummy-sagemaker | d579246c49b86c33caad5a7e81e957d7020d2393 | 2021-09-22T18:01:58.000Z | [
"pytorch",
"tensorboard",
"dataset:beans",
"timm",
"image-classification",
"generated_from_trainer",
"model-index"
] | image-classification | false | nateraw | null | nateraw/resnet50-beans-dummy-sagemaker | 0 | null | timm | 35,743 | ---
tags:
- image-classification
- timm
- generated_from_trainer
datasets:
- beans
model-index:
- name: model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
library_tag: timm
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [resnet18](https://huggingface.co/resnet18) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0219
- Acc1: 56.3910
- Acc5: 100.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 20
### Training results
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
nateraw/resnext50_32x4d | bbbc875826128bc1105b2bc8a42f17fdd0a21aff | 2021-04-13T10:21:23.000Z | [
"pytorch",
"resnet",
"transformers"
] | null | false | nateraw | null | nateraw/resnext50_32x4d | 0 | null | transformers | 35,744 | Entry not found |
nateraw/test-classifier-flash-2 | 8b2e7f8c8b642d1b0e3e30a31d2b99babe157929 | 2021-09-28T01:57:13.000Z | [
"pytorch",
"generic",
"image-classification"
] | image-classification | false | nateraw | null | nateraw/test-classifier-flash-2 | 0 | null | generic | 35,745 | ---
tags:
- image-classification
library_name: generic
---
# Test |
nateraw/text-classification-flash-demo-2 | def7d7720f1027a0d1ef07867f56b9a01a299b9a | 2021-09-28T04:42:37.000Z | [
"pytorch",
"generic",
"text-classification"
] | text-classification | false | nateraw | null | nateraw/text-classification-flash-demo-2 | 0 | null | generic | 35,746 | ---
tags:
- text-classification
library_name: generic
---
# Test |
nateraw/wide_resnet101_2 | 824b02923c930a28c0ac7653b4ee034f468d1bcb | 2021-04-13T10:26:39.000Z | [
"pytorch",
"resnet",
"transformers"
] | null | false | nateraw | null | nateraw/wide_resnet101_2 | 0 | null | transformers | 35,747 | Entry not found |
nateraw/wide_resnet50_2 | 912799d703816c9e2f6fd8210ffdef7315231dc6 | 2021-04-13T10:42:01.000Z | [
"pytorch",
"resnet",
"transformers"
] | null | false | nateraw | null | nateraw/wide_resnet50_2 | 0 | null | transformers | 35,748 | Entry not found |
nates-test-org/cait_s24_384 | c4ea39213e40cac30a1cbf51ab0b00e6a833e0da | 2021-10-29T04:25:03.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/cait_s24_384 | 0 | null | timm | 35,749 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for cait_s24_384 |
nates-test-org/cait_s36_384 | 256bf3d611972995458cd768c3151c1ab72f884d | 2021-10-29T04:27:40.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/cait_s36_384 | 0 | null | timm | 35,750 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for cait_s36_384 |
nates-test-org/coat_lite_mini | 9547fd30b2e8063fddf1f7c17044f8b8e857796e | 2021-10-29T04:36:52.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/coat_lite_mini | 0 | null | timm | 35,751 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for coat_lite_mini |
nates-test-org/cspdarknet53 | 8610503147e2c4e579df71887c688a6c51c5c0b9 | 2021-10-29T04:47:12.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/cspdarknet53 | 0 | null | timm | 35,752 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for cspdarknet53 |
natsuo/ja_hiragana | 0d9b88241ab41aeb8e4f687fc862a04a4a829e55 | 2021-07-13T07:59:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | natsuo | null | natsuo/ja_hiragana | 0 | null | transformers | 35,753 | Entry not found |
naughtycult/my-awesome-model | 026ff88e75afe1cc6af66b933a1fe95767d9083c | 2021-07-19T12:18:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | naughtycult | null | naughtycult/my-awesome-model | 0 | null | transformers | 35,754 | Entry not found |
navid-rekabsaz/advbert_ranker_l4 | d87b39af8b3e199f6707db96cf371768c83a54a5 | 2021-06-04T17:01:05.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | navid-rekabsaz | null | navid-rekabsaz/advbert_ranker_l4 | 0 | null | transformers | 35,755 | ## Welcome |
navjordj/gpt2_no | f3c46a34b68badfa6e1e60c82c89ebef02ca2c07 | 2021-11-17T20:09:17.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | navjordj | null | navjordj/gpt2_no | 0 | null | transformers | 35,756 | Trained for 3 epochs on the Norwegian OSCAR corpus.
warmup_steps = 1000
learning_rate = 5e-3
block_size = 512
per_device_train_batch_size = 64
Roughly 1.5 hours per epoch on a TPU v3-8. |
nboost/pt-biobert-base-msmarco | c5d5eeac4da656703ea42ca9ae4871d71f5776b2 | 2021-05-20T01:27:20.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | null | false | nboost | null | nboost/pt-biobert-base-msmarco | 0 | null | transformers | 35,757 | Entry not found |
ncduy/bert-base-cased-finetuned-squad-test | ea8c5f629829e3a1c71dbedd42e2a4d7d399d548 | 2021-12-09T11:44:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ncduy | null | ncduy/bert-base-cased-finetuned-squad-test | 0 | null | transformers | 35,758 | Entry not found |
ncduy/gpt2-wikitext2 | eb272c815e8f6af45f5c64dcc6e597dce497c911 | 2021-08-06T14:27:43.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer"
] | text-generation | false | ncduy | null | ncduy/gpt2-wikitext2 | 0 | null | transformers | 35,759 | ---
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: gpt2-wikitext2
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3114
## Model description
More information needed
## Intended uses & limitations
More information needed
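As a hedged illustration (not part of the original card), the checkpoint can be exercised with the standard text-generation pipeline:
```python
from transformers import pipeline

# Hedged sketch: standard text-generation usage; prompt and length are arbitrary examples.
generator = pipeline("text-generation", model="ncduy/gpt2-wikitext2")
print(generator("The history of natural language processing", max_length=40, num_return_sequences=1))
```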
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7439 | 1.0 | 2249 | 6.6501 |
| 6.4023 | 2.0 | 4498 | 6.3852 |
| 6.2426 | 3.0 | 6747 | 6.3114 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
ncoop57/athena | e87ac647d1a7bb02e3ba0677b2713b1536b155e0 | 2022-01-04T19:24:11.000Z | [
"pytorch",
"longformer",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | ncoop57 | null | ncoop57/athena | 0 | null | sentence-transformers | 35,760 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ncoop57/athena
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 256-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ncoop57/athena')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ncoop57/athena)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 50 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: LongformerModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ncoop57/codeparrot-py | 0ee25d2d8a639ef1a3919371a1dc84c2e04d8675 | 2022-01-26T00:54:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ncoop57 | null | ncoop57/codeparrot-py | 0 | null | transformers | 35,761 | Entry not found |
ncoop57/codeparrot-test | 9cf10d70660adbfb399cee321a00241ae7d03f3d | 2021-12-17T20:46:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ncoop57 | null | ncoop57/codeparrot-test | 0 | null | transformers | 35,762 | Entry not found |
ncoop57/multilingual-codesearch | a6806056bcbc54fc23c51d949ef74ee97a52fbc5 | 2021-04-03T03:06:55.000Z | [
"pytorch",
"transformers"
] | null | false | ncoop57 | null | ncoop57/multilingual-codesearch | 0 | null | transformers | 35,763 | Entry not found |
ncoop57/testmodel | 2e88b479f867f8730ec6d4ed7eb31a71c4e5c373 | 2021-06-07T00:41:20.000Z | [
"pytorch",
"transformers"
] | null | false | ncoop57 | null | ncoop57/testmodel | 0 | null | transformers | 35,764 | Entry not found |
ndevavarapu/utterance_gen | 48ecbb09983d38d453804c3c8c1ed58258be740b | 2021-06-23T13:11:00.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ndevavarapu | null | ndevavarapu/utterance_gen | 0 | null | transformers | 35,765 | Entry not found |
negfir/Bert2layer | 672490459661d2d669880ac334cc19aa203cc612 | 2021-12-15T13:53:26.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/Bert2layer | 0 | null | transformers | 35,766 | Entry not found |
negfir/Bertbase | fbd42f17c3472ac8c7deee949b55cf383c7ce2ee | 2021-12-14T20:57:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/Bertbase | 0 | null | transformers | 35,767 | Entry not found |
negfir/my-awesome-model | c3d6a24593c7427c1f8c7c572c40cd695815b488 | 2021-12-07T02:48:25.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/my-awesome-model | 0 | null | transformers | 35,768 | Entry not found |
neuralspace/indic-transformers-hi-distilbert | 294d253afa53724ccee4c894ee24bfbd32e936d9 | 2020-10-27T15:02:32.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | neuralspace | null | neuralspace/indic-transformers-hi-distilbert | 0 | 1 | transformers | 35,769 | Entry not found |
neurocode/Icelandic-NER-base | 447ed2be4570c70c99496de4e12a428bf5f261c5 | 2020-10-22T07:52:14.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | neurocode | null | neurocode/Icelandic-NER-base | 0 | null | transformers | 35,770 | Entry not found |
neurocode/Icelandic-NER-large | 77dfcdf42cec37211a2b582df19cb486e8905af3 | 2020-10-22T09:32:22.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | neurocode | null | neurocode/Icelandic-NER-large | 0 | null | transformers | 35,771 | Entry not found |
nfliu/roberta_books_wiki_bpe_32k | f1b2d98bdf8a047ed59f4605b74f2fa5e478905c | 2021-12-08T21:48:25.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nfliu | null | nfliu/roberta_books_wiki_bpe_32k | 0 | null | transformers | 35,772 | Entry not found |
nfliu/roberta_books_wiki_bpe_47k | d731e404efe7d11ae090fbeb3df729793f54ecab | 2021-12-08T21:52:31.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nfliu | null | nfliu/roberta_books_wiki_bpe_47k | 0 | null | transformers | 35,773 | Entry not found |
ngdiana/xlsr-2-bart | 7a3fd2a466aec73d3e3750337dcba6f08493040a | 2022-02-12T12:52:12.000Z | [
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | ngdiana | null | ngdiana/xlsr-2-bart | 0 | null | transformers | 35,774 | Entry not found |
nhrony/bert-bn | b0ca6b3496230dd346b27b7fa26de867fdcba76b | 2022-01-16T17:36:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nhrony | null | nhrony/bert-bn | 0 | null | transformers | 35,775 | Entry not found |
niclas/model_en_2 | 53d97898d67438bb3c13d1a70208f7808a6176e5 | 2022-02-17T21:15:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | niclas | null | niclas/model_en_2 | 0 | null | transformers | 35,776 | Entry not found |
niclas/model_sv_5 | 69d5936002eb7f8003cde870fb2c9710b3035e69 | 2021-12-23T01:05:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | niclas | null | niclas/model_sv_5 | 0 | null | transformers | 35,777 | Entry not found |
niclas/models_sv_6 | bc90f8dd476d165a4d03a0c55a3fd676fc00ea94 | 2022-02-21T16:30:58.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | niclas | null | niclas/models_sv_6 | 0 | null | transformers | 35,778 | |
nielsr/deformable-detr | ef2269b4fa98a9cb738132da885dcb5f80e284c3 | 2022-02-01T13:29:07.000Z | [
"pytorch",
"deformable_detr",
"transformers"
] | null | false | nielsr | null | nielsr/deformable-detr | 0 | null | transformers | 35,779 | Entry not found |
nielsr/detr-testje | 6aeb8da036b70de4925edc23f4f1d8508b0c75f5 | 2021-04-28T06:42:48.000Z | [
"pytorch",
"detr",
"transformers"
] | null | false | nielsr | null | nielsr/detr-testje | 0 | null | transformers | 35,780 | Entry not found |
nielsr/tapex-large-finetuned-wtq | c418203c2388b7475d74012b6d98b7224955d2ef | 2022-01-17T09:56:43.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:wtq",
"arxiv:2107.07653",
"transformers",
"tapex",
"table-question-answering",
"license:apache-2.0",
"autotrain_compatible"
] | table-question-answering | false | nielsr | null | nielsr/tapex-large-finetuned-wtq | 0 | 2 | transformers | 35,781 | ---
language: en
tags:
- tapex
- table-question-answering
license: apache-2.0
datasets:
- wtq
inference: false
---
TAPEX-large model fine-tuned on WTQ. This model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. Original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
To load it and run inference, you can do the following:
```python
from transformers import BartTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large-finetuned-wtq")
model = BartForConditionalGeneration.from_pretrained("nielsr/tapex-large-finetuned-wtq")
# create table
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
# turn into dict
table_dict = {"header": list(table.columns), "rows": [list(row.values) for i,row in table.iterrows()]}
# turn into format TAPEX expects
# define the linearizer based on this code: https://github.com/microsoft/Table-Pretraining/blob/main/tapex/processor/table_linearize.py
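# NOTE: minimal stand-in (an assumption, not the original class from that repo) so this snippet
# runs on its own; it follows the standard TAPEX flattening "col : h1 | h2 row 1 : v1 | v2 ...".
class IndexedRowTableLinearize:
    def process_table(self, table_dict):
        parts = ["col : " + " | ".join(table_dict["header"])]
        for idx, row in enumerate(table_dict["rows"], start=1):
            parts.append(f"row {idx} : " + " | ".join(str(v) for v in row))
        return " ".join(parts)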
linearizer = IndexedRowTableLinearize()
linear_table = linearizer.process_table(table_dict)
# add question
question = "how many movies does George Clooney have?"
joint_input = question + " " + linear_table
# encode
encoding = tokenizer(joint_input, return_tensors="pt")
# forward pass
outputs = model.generate(**encoding)
# decode
tokenizer.batch_decode(outputs, skip_special_tokens=True)
``` |
nikcook/distilbert-base-uncased-finetuned-squad | da4190f9f867de446aa1aab81e77c9dd10bd5bdc | 2022-01-13T11:28:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | nikcook | null | nikcook/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 35,782 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1581
## Model description
More information needed
## Intended uses & limitations
More information needed
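As a hedged illustration (not part of the original card), the checkpoint can be used with the standard question-answering pipeline:
```python
from transformers import pipeline

# Hedged sketch: standard extractive QA usage; the question/context pair is an arbitrary example.
qa = pipeline("question-answering", model="nikcook/distilbert-base-uncased-finetuned-squad")
print(qa(question="What dataset was the model fine-tuned on?",
         context="The model was fine-tuned on the SQuAD dataset."))
```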
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2199 | 1.0 | 5533 | 1.1525 |
| 0.9463 | 2.0 | 11066 | 1.1298 |
| 0.7636 | 3.0 | 16599 | 1.1581 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nikhilpatil2532000/DialoGPT-small-harrypotter | 684c2f56ca8e715d59b321b4fb6167dede8c3e3e | 2021-10-11T15:34:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | nikhilpatil2532000 | null | nikhilpatil2532000/DialoGPT-small-harrypotter | 0 | null | transformers | 35,783 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
nikitam/mbert-tlm-chat-en-zh | fe6f9ea87d337da10fc62c79678e567724ef1d00 | 2021-09-27T11:08:25.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-tlm-chat-en-zh | 0 | null | transformers | 35,784 | Entry not found |
nikitam/mbert-xdm-en-zh | c1d56a829141712e7cfa9c73a9c7a65308c15570 | 2021-10-12T10:50:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-xdm-en-zh | 0 | null | transformers | 35,785 | Entry not found |
nikolamilosevic/xlm-roberta-base-finetuned-panx-de | ea628988d681ef54e4b7c99428b1e219102d2aad | 2022-02-15T14:52:51.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | nikolamilosevic | null | nikolamilosevic/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | 35,786 | Entry not found |
nimelinia/rut5-reply-headline-model | b88438af97550fdf4eb3da9f79350f9c76e0f8d9 | 2022-01-24T12:31:54.000Z | [
"pytorch"
] | null | false | nimelinia | null | nimelinia/rut5-reply-headline-model | 0 | null | null | 35,787 | This model was trained from rut5-base-multitask with pair of questions and answers (in Russian).
The model demonstrate interesting behavior with option "reply" and "headline".
When model creates a headline for paragraph of text, it not only uses phrases from text, but also generate new words and sometimes new meanings.
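A minimal generation sketch follows. It assumes a standard T5 checkpoint layout and the task-prefix convention of the `rut5-base-multitask` parent, i.e. prompts such as `reply | <question>` and `headline | <text>`; that prefix format is an assumption, not something this card documents.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "nimelinia/rut5-reply-headline-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

def generate(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# The "task | text" prefixes are assumed from rut5-base-multitask
print(generate("reply | Где купить вкусное мороженое?"))
print(generate("headline | Позиция России по макроэкономическим показателям является лучшей в мире."))
```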
Examples of questions and answers:
> Как зовут отца Александра Сергеевича Пушкина? (English: What is the name of Alexander Sergeyevich Pushkin's father?)
> - Пушкин (English: Pushkin)
> Где купить вкусное мороженое? (English: Where can I buy tasty ice cream?)
> - В супермаркете (English: At the supermarket)
> Красивая ли Мона Лиза? (English: Is the Mona Lisa beautiful?)
> - Очень красивая (English: Very beautiful)
Examples of headlines:
> Власти Пекина из-за пандемии COVID-19 призвали жителей города отказаться от помощи и избегать любого контакта с олимпийскими машинами, попавшими в ДТП. Об этом сообщает South China Morning Post. (English: Because of the COVID-19 pandemic, Beijing authorities urged city residents to refrain from helping and to avoid any contact with Olympic vehicles involved in road accidents, the South China Morning Post reports.)
> - Китайский губернатор призвал жителей Пекина отказаться от помощи (English: Chinese governor urged Beijing residents to refuse to help)
> Казахский народ должен поддержать своего президента Касым-Жомарт Токаева на фоне угрозы повторения массовых беспорядков, но и властям страны следует провести демократические реформы для снижения недовольства. Об этом в интервью изданию Orda заявил бывший генеральный продюсер гостелеканала «Хабар», экс-глава канала «Ел Арна» Серик Абас-Шах. (English: The Kazakh people should support their president Kassym-Jomart Tokayev amid the threat of renewed mass unrest, but the country's authorities should also carry out democratic reforms to reduce discontent, said Serik Abas-Shah, former general producer of the state TV channel Khabar and ex-head of the El Arna channel, in an interview with Orda.)
> - Казахский народ должен поддержать Токаева (English: The Kazakh people should support Tokayev)
> Позиция России по макроэкономическим показателям является лучшей в мире. Об этом сказал ТАСС российский исполнительный директор в Международном валютном фонде (МВФ) Алексей Можин. (English: Russia's standing on macroeconomic indicators is the best in the world, Alexei Mozhin, Russia's executive director at the International Monetary Fund (IMF), told TASS.)
> - Российская экономика является лучшей в мире (English: The Russian economy is the best in the world) |
nimrah/wav2vec2-large-xls-r-300m-hindi-colab | 6ab9b2f2bb1b75fdcacb35c3b1dc31a51a4edc1a | 2022-01-19T21:21:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/wav2vec2-large-xls-r-300m-hindi-colab | 0 | null | transformers | 35,788 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
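Usage is likewise undocumented, but since this is a wav2vec2 CTC checkpoint, transcription presumably follows the usual pattern sketched below; the audio file name is a placeholder, and the recording is downmixed to mono and resampled to the 16 kHz the model expects:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "nimrah/wav2vec2-large-xls-r-300m-hindi-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder file name; downmix to mono and resample to 16 kHz
speech, sr = torchaudio.load("example_hindi.wav")
speech = torchaudio.transforms.Resample(sr, 16_000)(speech).mean(dim=0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```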
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
nimrah/wav2vec2-large-xls-r-300m-my_hindi_presentation-colab | 88e5b5d5fc5a3874a153e67c7847f27dd4678155 | 2022-02-20T11:04:36.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/wav2vec2-large-xls-r-300m-my_hindi_presentation-colab | 0 | null | transformers | 35,789 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-my_hindi_presentation-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-my_hindi_presentation-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
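For reference, the hyperparameters above correspond roughly to the `TrainingArguments` sketch below; the output directory and anything not listed in the card are assumptions:
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above; unspecified values are assumptions
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-my_hindi_presentation-colab",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size 32
    seed=42,
    warmup_steps=500,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
)
```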
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
nimrazaheer/DialoGPT-small-harrypotter | 8e188bf9245965bd324c115bc6fc3e564991b0bf | 2022-02-07T16:08:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | nimrazaheer | null | nimrazaheer/DialoGPT-small-harrypotter | 0 | null | transformers | 35,790 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
nlpHakdang/roberta-large-NER | 1ea9fd970cb30d3d1da79631e101b7a1226cab2c | 2021-12-20T11:32:10.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | nlpHakdang | null | nlpHakdang/roberta-large-NER | 0 | null | transformers | 35,791 | Entry not found |
nlpunibo/distilbert_config1 | fb6504b687b3e528a6e69be4f05acf14ef7d88b4 | 2021-02-19T14:45:08.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlpunibo | null | nlpunibo/distilbert_config1 | 0 | null | transformers | 35,792 | Entry not found |
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-localParams | c8e7b75720735e534141e5fff1457e8049380cb7 | 2022-01-24T08:29:47.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | nntadotzip | null | nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-localParams | 0 | null | transformers | 35,793 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlnet-base-cased-IUChatbot-ontologyDts-localParams
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-IUChatbot-ontologyDts-localParams
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0238
## Model description
More information needed
## Intended uses & limitations
More information needed
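In the absence of usage details, a hedged inference sketch for this XLNet question-answering checkpoint could look like the following; the chatbot-ontology question and context are invented placeholders:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-localParams"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

# Invented placeholder question/context
question = "Which office handles enrollment questions?"
context = "Enrollment questions are handled by the Office of the Registrar."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode the answer span
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits)) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```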
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1172 | 1.0 | 1119 | 0.0657 |
| 0.0564 | 2.0 | 2238 | 0.0237 |
| 0.033 | 3.0 | 3357 | 0.0238 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts | cf0209052dd04ba385ebb2f260908c363ed3ada6 | 2022-01-20T17:12:19.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | nntadotzip | null | nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts | 0 | null | transformers | 35,794 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlnet-base-cased-IUChatbot-ontologyDts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-IUChatbot-ontologyDts
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 318 | 0.5005 |
| 0.8222 | 2.0 | 636 | 0.4488 |
| 0.8222 | 3.0 | 954 | 0.4965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nostalgebraist/clip-tumblr-vae | 8849154c3eb898c71070b021345c72bb74aec552 | 2021-08-12T21:51:28.000Z | [
"pytorch",
"clip",
"feature-extraction",
"transformers"
] | feature-extraction | false | nostalgebraist | null | nostalgebraist/clip-tumblr-vae | 0 | null | transformers | 35,795 | Entry not found |
not-tanh/wav2vec2-large-xlsr-53-vietnamese | 3423b2dc7e8e0f1e41078cdaf51a25ae592864fa | 2021-04-02T10:59:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"vi",
"dataset:common_voice",
"dataset:vivos",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | not-tanh | null | not-tanh/wav2vec2-large-xlsr-53-vietnamese | 0 | 2 | transformers | 35,796 | ---
language: vi
datasets:
- common_voice
- vivos
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Ted Vietnamese XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: 39.571823
---
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice), [Vivos dataset](https://ailab.hcmus.edu.vn/vivos) and [FOSD dataset](https://data.mendeley.com/datasets/k9sxg2twv4/4).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "vi", split="test")
processor = Wav2Vec2Processor.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese")
model.to("cuda")
chars_to_ignore_regex = r'[,?.!\-;:"“%\'�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed audio and collect the predicted transcriptions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 39.571823%
## Training
## TODO
The Common Voice `train` and `validation` splits, together with the VIVOS and FOSD datasets, were used for training.
The script used for training can be found ... # TODO |
not7even/DialoGPT-small-7evenpool | 1230d2467c4a1b1f10c2d6e86997eda7263b5e56 | 2021-11-30T17:17:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | not7even | null | not7even/DialoGPT-small-7evenpool | 0 | null | transformers | 35,797 | ---
tags:
- conversational
---
# 7evenpool DialoGPT Model |
ntp0102/wav2vec2-base-timit-demo-colab | 3a1d99375b2a7dd46d4af3da8951fa059deda337 | 2021-12-31T10:30:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | ntp0102 | null | ntp0102/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 35,798 | pretrain |
nuod/wav2vec2 | 4a8b8474fbb58b654c46d942b63cfbdb3729b5fe | 2021-11-22T05:53:23.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"transformers"
] | null | false | nuod | null | nuod/wav2vec2 | 0 | null | transformers | 35,799 | Entry not found |