modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sidkhuntia/harrypotter | 321c79e8b949529e11d31f8d55d7a1111081ca6c | 2021-11-03T07:37:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | sidkhuntia | null | sidkhuntia/harrypotter | 1 | null | transformers | 30,300 | ---
tags:
- conversational
---
# Harry Potter |
sienog/autonlp-mt5-xlsum-25085641 | 5d360a94fe2edc8f7be09875cfe866036632cb81 | 2021-10-22T17:20:30.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"unk",
"dataset:sienog/autonlp-data-mt5-xlsum",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | sienog | null | sienog/autonlp-mt5-xlsum-25085641 | 1 | null | transformers | 30,301 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- sienog/autonlp-data-mt5-xlsum
co2_eq_emissions: 11.166602089650883
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 25085641
- CO2 Emissions (in grams): 11.166602089650883
## Validation Metrics
- Loss: 1.173471212387085
- Rouge1: 51.7353
- Rouge2: 36.6771
- RougeL: 45.4129
- RougeLsum: 48.8512
- Gen Len: 82.9375
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/sienog/autonlp-mt5-xlsum-25085641
``` |
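The same request can be sent from Python with `requests`; this is a minimal sketch mirroring the cURL call above (`YOUR_HUGGINGFACE_API_KEY` is a placeholder token):
```python
import requests

# Mirrors the cURL call above; replace the placeholder token with a real API key.
API_URL = "https://api-inference.huggingface.co/sienog/autonlp-mt5-xlsum-25085641"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoNLP"})
print(response.json())
```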
sifclairhelix/DialoGPT-small-harrypot | a4463f378dbf45ce3b5d908ac48ae0e1cc3730a1 | 2021-09-03T17:12:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | sifclairhelix | null | sifclairhelix/DialoGPT-small-harrypot | 1 | null | transformers | 30,302 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
simonmun/COHA1830s | 12bd719f619557089a59c861189ea686f79d333e | 2021-05-20T21:32:06.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1830s | 1 | null | transformers | 30,303 | Entry not found |
simonmun/COHA1850s | 0e2a9adfec93a145cc7899ae5e26e2f2054b5dc5 | 2021-05-20T21:33:55.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1850s | 1 | null | transformers | 30,304 | Entry not found |
simonmun/COHA1990s | 1eba6574b90547be0825626f3c41b39aedec3849 | 2021-05-20T21:48:59.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1990s | 1 | null | transformers | 30,305 | Entry not found |
skillzzzzzy/bengberto | 85cb3cf37789e55a669384e6207785eae9ef9950 | 2021-11-14T13:10:48.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | skillzzzzzy | null | skillzzzzzy/bengberto | 1 | null | transformers | 30,306 | Entry not found |
skillzzzzzy/hindberto | 117d222ace4e3d38a49a28c39868a2b52251b2c7 | 2021-11-14T12:46:31.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | skillzzzzzy | null | skillzzzzzy/hindberto | 1 | null | transformers | 30,307 | Entry not found |
skillzzzzzy/tamilberto | 8d630a2b38e1b39fc37fa8902bb6250dfed36465 | 2021-11-14T13:20:33.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | skillzzzzzy | null | skillzzzzzy/tamilberto | 1 | null | transformers | 30,308 | Entry not found |
skimai/electra-small-spanish | 570aa7ca5f34392c7083ff7b28fec41f471c39e9 | 2020-05-08T19:16:48.000Z | [
"pytorch",
"transformers"
] | null | false | skimai | null | skimai/electra-small-spanish | 1 | null | transformers | 30,309 | Entry not found |
skylord/greek_lsr_1 | 915042637ab60382985f73db8286abbcb0eb9cf0 | 2021-03-26T05:37:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"el",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | skylord | null | skylord/greek_lsr_1 | 1 | null | transformers | 30,310 | ---
language: el
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Greek XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice el
type: common_voice
args: el
metrics:
- name: Test WER
type: wer
value: 56.253154
---
# Wav2Vec2-Large-XLSR-53-Greek
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "el", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1")
model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Greek test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1")
model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 56.253154 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](...).
|
smartpim/k2t_ru_02 | d82ad6fcc250c336bf35a2a5f4d75cb8f169b16d | 2022-02-13T17:49:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | smartpim | null | smartpim/k2t_ru_02 | 1 | null | transformers | 30,311 | Entry not found |
smeoni/deberta-base-clrp | 3c761d923d168ba37a69f8eeaf9301bec59e1324 | 2021-06-23T09:45:15.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | smeoni | null | smeoni/deberta-base-clrp | 1 | null | transformers | 30,312 | Entry not found |
smeoni/distilroberta-base-clrp | 40e9f755daa43abd29971440136512a279c38b37 | 2021-06-23T10:04:08.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | smeoni | null | smeoni/distilroberta-base-clrp | 1 | null | transformers | 30,313 | Entry not found |
smeoni/electra-base-discriminator-clrp | 98921a37d0c19a8fac4997d00811b721c2a08070 | 2021-06-23T10:11:13.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | smeoni | null | smeoni/electra-base-discriminator-clrp | 1 | null | transformers | 30,314 | Entry not found |
smeoni/roberta-base-clrp | d5c3f37d870d79e62e7a6418d0f251d6f837b304 | 2021-06-21T21:29:20.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | smeoni | null | smeoni/roberta-base-clrp | 1 | null | transformers | 30,315 | Entry not found |
smonah/distilbert-base-uncased-finetuned-squad | a7f386a61e464d7967ddc64146fc125c3e5f783a | 2021-12-20T15:43:44.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | smonah | null | smonah/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 30,316 | Entry not found |
softcatala/wav2vec2-large-100k-voxpopuli-catala | fb77f7231b0688f6e004a03012ebf59162bacd16 | 2022-02-08T02:20:32.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ca",
"dataset:common_voice",
"dataset:parlament_parla",
"transformers",
"audio",
"speech",
"speech-to-text",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | softcatala | null | softcatala/wav2vec2-large-100k-voxpopuli-catala | 1 | null | transformers | 30,317 | ---
language: ca
datasets:
- common_voice
- parlament_parla
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- speech-to-text
license: apache-2.0
model-index:
- name: Catalan VoxPopuli Wav2Vec2 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
datasets:
- name: Common Voice ca
type: common_voice
args: ca
- name: ParlamentParla
url: https://www.openslr.org/59/
metrics:
- name: Test WER
type: wer
value: 5.98
- name: Google Crowdsourced Corpus WER
type: wer
value: 12.14
- name: Audiobook “La llegenda de Sant Jordi” WER
type: wer
value: 12.02
---
# Wav2Vec2-Large-100k-VoxPopuli-Català
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.
**Attention:** The train/dev/test split used does not fully match the CommonVoice 6.1 dataset. A custom split combining the CommonVoice and ParlamentParla datasets was used; it can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test set will produce a biased WER, as 1144 audio files from that set were used in training/evaluation of this model.
WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) which was not seen by the model during training/evaluation.
You can find training and evaluation scripts in the github repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala)
When using this model, make sure that your speech input is sampled at 16kHz.
## Results
Word error rate was evaluated on the following datasets unseen by the model:
| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) | 5.98% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.14% |
| Audiobook “La llegenda de Sant Jordi” | 12.02% |
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
``` |
softcatala/wav2vec2-large-xlsr-catala | 4e8ceed125344298e04a2a5d9ce1645f7fc3d4b9 | 2022-02-08T00:23:02.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ca",
"dataset:common_voice",
"dataset:parlament_parla",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | softcatala | null | softcatala/wav2vec2-large-xlsr-catala | 1 | null | transformers | 30,318 | ---
language: ca
datasets:
- common_voice
- parlament_parla
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Catalan XLSR Wav2Vec2 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
datasets:
- name: Common Voice ca
type: common_voice
args: ca
- name: ParlamentParla
url: https://www.openslr.org/59/
metrics:
- name: Test WER
type: wer
value: 6.92
- name: Google Crowdsourced Corpus WER
type: wer
value: 12.99
- name: Audiobook “La llegenda de Sant Jordi” WER
type: wer
value: 13.23
---
# Wav2Vec2-Large-XLSR-Català
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.
**Attention:** The train/dev/test split used does not fully match the CommonVoice 6.1 dataset. A custom split combining the CommonVoice and ParlamentParla datasets was used; it can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test set will produce a biased WER, as 1144 audio files from that set were used in training/evaluation of this model.
WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv) which was not seen by the model during training/evaluation.
You can find training and evaluation scripts in the github repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala)
When using this model, make sure that your speech input is sampled at 16kHz.
## Results
Word error rate was evaluated on the following datasets unseen by the model:
| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv) | 6.92% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.99% |
| Audiobook “La llegenda de Sant Jordi” | 13.23% |
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
``` |
soheeyang/dpr-ctx_encoder-single-trivia-base | 59536a5f080a63f7572dd1a377ba74343ba2b5a7 | 2021-04-15T14:48:50.000Z | [
"pytorch",
"tf",
"dpr",
"arxiv:2004.04906",
"transformers"
] | null | false | soheeyang | null | soheeyang/dpr-ctx_encoder-single-trivia-base | 1 | null | transformers | 30,319 | # DPRContextEncoder for TriviaQA
## dpr-ctx_encoder-single-trivia-base
Dense Passage Retrieval (`DPR`)
Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih, [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906), EMNLP 2020.
This model is the context encoder of DPR trained solely on TriviaQA (single-trivia) using the [official implementation of DPR](https://github.com/facebookresearch/DPR).
Disclaimer: This model is not from the authors of DPR; it is my own reproduction. The authors did not release DPR weights trained solely on TriviaQA, so I hope this checkpoint is helpful for those who want to use DPR trained only on TriviaQA.
## Performance
The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0.
The values in parentheses are those reported in the paper.
| Top-K Passages | TriviaQA Dev | TriviaQA Test |
|----------------|--------------|---------------|
| 1 | 54.27 | 54.41 |
| 5 | 71.11 | 70.99 |
| 20 | 79.53 | 79.31 (79.4) |
| 50 | 82.72 | 82.99 |
| 100 | 85.07 | 84.99 (85.0) |
## How to Use
`AutoModel` does not properly detect whether a checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`, so please specify the exact class when loading the model.
```python
from transformers import DPRContextEncoder, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("soheeyang/dpr-ctx_encoder-single-trivia-base")
ctx_encoder = DPRContextEncoder.from_pretrained("soheeyang/dpr-ctx_encoder-single-trivia-base")
data = tokenizer("context comes here", return_tensors="pt")
ctx_embedding = ctx_encoder(**data).pooler_output # embedding vector for context
```
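For end-to-end retrieval you also need a matching question encoder. Below is a minimal scoring sketch, assuming the companion checkpoint `soheeyang/dpr-question_encoder-single-trivia-base` (not covered by this card) is available; the passages and question are illustrative only:
```python
import torch
from transformers import AutoTokenizer, DPRContextEncoder, DPRQuestionEncoder

ctx_name = "soheeyang/dpr-ctx_encoder-single-trivia-base"
# Assumed companion repository; replace with the question encoder you actually use.
q_name = "soheeyang/dpr-question_encoder-single-trivia-base"

ctx_tokenizer = AutoTokenizer.from_pretrained(ctx_name)
ctx_encoder = DPRContextEncoder.from_pretrained(ctx_name)
q_tokenizer = AutoTokenizer.from_pretrained(q_name)
q_encoder = DPRQuestionEncoder.from_pretrained(q_name)

passages = [
    "Harry Potter was written by J. K. Rowling.",
    "The Eiffel Tower is located in Paris.",
]
ctx_inputs = ctx_tokenizer(passages, return_tensors="pt", padding=True, truncation=True)
q_inputs = q_tokenizer("Who wrote Harry Potter?", return_tensors="pt")

with torch.no_grad():
    ctx_emb = ctx_encoder(**ctx_inputs).pooler_output  # (num_passages, 768)
    q_emb = q_encoder(**q_inputs).pooler_output        # (1, 768)

scores = torch.matmul(q_emb, ctx_emb.T)  # inner-product relevance scores
print(passages[scores.argmax().item()])
```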
|
soikit/distilgpt2-finetuned-wikitext2 | bdfe6fe3da5588252ad1229041ae882c2188feea | 2021-10-19T13:23:40.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | soikit | null | soikit/distilgpt2-finetuned-wikitext2 | 1 | null | transformers | 30,320 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
soroush/t5-finetuned-lesson-summarizer | 7c02073d028f85d137e7fcc044c120f18f1beb7f | 2020-07-26T23:56:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | soroush | null | soroush/t5-finetuned-lesson-summarizer | 1 | null | transformers | 30,321 | Entry not found |
sourabharsh/wav2vec2_rajya_sabha | d85272efdb2b5bff710192f1b861f05bd15eed9e | 2021-07-14T08:52:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | sourabharsh | null | sourabharsh/wav2vec2_rajya_sabha | 1 | null | transformers | 30,322 | Entry not found |
speeqo/wav2vec2-base-100h-with-lm | e44a4ad122a2cd7379c028268dceddbfd2f7d9fb | 2022-02-04T13:19:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | speeqo | null | speeqo/wav2vec2-base-100h-with-lm | 1 | null | transformers | 30,323 | Entry not found |
sravn/e2e-qg-scibert | fd310d5518f88058049a8769875ccf71e07e0e49 | 2021-07-09T17:18:08.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sravn | null | sravn/e2e-qg-scibert | 1 | null | transformers | 30,324 | Entry not found |
sravya/ELECTRA_SD_V4 | 568f62c010a6ad791a1a743a2f000fd014097bf3 | 2021-06-10T03:57:37.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | sravya | null | sravya/ELECTRA_SD_V4 | 1 | null | transformers | 30,325 | Entry not found |
sripadhstudy/100_SDB_TAxxL_average_768 | d56e5c76491e18165d712cee382bb387f8998a1f | 2021-06-05T15:37:11.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sripadhstudy | null | sripadhstudy/100_SDB_TAxxL_average_768 | 1 | null | transformers | 30,326 | Entry not found |
sripadhstudy/500_SAB_TAxxL_truncate_3L_h2048_original | 946222972eb1c2abb818a82d2e7935cd7c1673a5 | 2021-06-21T02:14:44.000Z | [
"pytorch"
] | null | false | sripadhstudy | null | sripadhstudy/500_SAB_TAxxL_truncate_3L_h2048_original | 1 | null | null | 30,327 | Entry not found |
sripadhstudy/500_SAB_TAxxL_truncate_3_layers | 8071699f05c3cd2d4a9ca003e323c83567d6968f | 2021-06-14T15:51:21.000Z | [
"pytorch"
] | null | false | sripadhstudy | null | sripadhstudy/500_SAB_TAxxL_truncate_3_layers | 1 | null | null | 30,328 | Entry not found |
sripadhstudy/500_SAB_TAxxL_truncate_768 | 53c7ea2e22a385fd8bbd63f92eb65e852e96d1ab | 2021-06-10T14:18:20.000Z | [
"pytorch",
"albert",
"transformers"
] | null | false | sripadhstudy | null | sripadhstudy/500_SAB_TAxxL_truncate_768 | 1 | null | transformers | 30,329 | Entry not found |
sripadhstudy/500_SDB_TAxxL_truncate_768 | 1430e9156ea1b083e7e09dc1f33b2ac7203f177e | 2021-06-09T06:41:53.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sripadhstudy | null | sripadhstudy/500_SDB_TAxxL_truncate_768 | 1 | null | transformers | 30,330 | Entry not found |
sripadhstudy/50_SDB_TAxxL_average_768 | 556e2c629347b6e5b47c5c3f20d94ba1459d49a4 | 2021-06-04T16:40:02.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sripadhstudy | null | sripadhstudy/50_SDB_TAxxL_average_768 | 1 | null | transformers | 30,331 | Entry not found |
sripadhstudy/50_SDB_TAxxL_truncate_768 | 80884863a4ccc6e59874b94e19c21d1a020b7354 | 2021-06-04T14:04:52.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sripadhstudy | null | sripadhstudy/50_SDB_TAxxL_truncate_768 | 1 | null | transformers | 30,332 | Entry not found |
ssardorf/t5-meta-desc | eb65651a5daf618d28d00fd87c99287c2ecaa573 | 2022-02-23T10:20:38.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ssardorf | null | ssardorf/t5-meta-desc | 1 | null | transformers | 30,333 | Entry not found |
sshasnain/wav2vec2-xls-r-300m-bangla-command | 7d4c690f2f489dc03800b205d2fbf783abfc87ff | 2022-02-11T13:10:44.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"Bengali",
"dataset:custom",
"transformers",
"bn",
"audio",
"speech",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sshasnain | null | sshasnain/wav2vec2-xls-r-300m-bangla-command | 1 | 1 | transformers | 30,334 | ---
language: Bengali
datasets:
- custom
metrics:
- wer
tags:
- bn
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: wav2vec2-xls-r-300m-bangla-command
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: custom
type: custom
args: ben
metrics:
- name: Test WER
type: wer
value: 0.006
---
# wav2vec2-xls-r-300m-bangla-command
***
## Usage
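No loading code is given on this card; a minimal sketch using the 🤗 Transformers ASR pipeline, assuming a local 16 kHz recording at the placeholder path `command.wav`:
```python
from transformers import pipeline

# Minimal sketch: transcribe one local 16 kHz recording with the fine-tuned checkpoint.
# "command.wav" is a placeholder path.
asr = pipeline(
    "automatic-speech-recognition",
    model="sshasnain/wav2vec2-xls-r-300m-bangla-command",
)
print(asr("command.wav")["text"])
```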
Commands:
- '৫ টা কলম দেন'
- 'চেয়ারটা কোথায় রেখেছেন'
- 'ডানের বালতিটার প্রাইজ কেমন'
- 'দশ কেজি আলু কত'
- 'বাজুসের ল্যাপটপটা এসেছে'
- 'বাসার জন্য দরজা আছে'
- 'ম্যাম মোবাইলটা কি আছে'
- 'হ্যালো শ্যাম্পুর দাম বল' |
sshasnain/wav2vec2-xls-r-timit-trainer | 84c7218d70ee6572afccf263cb00268e17aef785 | 2022-01-04T14:49:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sshasnain | null | sshasnain/wav2vec2-xls-r-timit-trainer | 1 | null | transformers | 30,335 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-timit-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-timit-trainer
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1064
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5537 | 4.03 | 500 | 0.6078 | 1.0 |
| 0.5444 | 8.06 | 1000 | 0.4990 | 0.9994 |
| 0.3744 | 12.1 | 1500 | 0.5530 | 1.0 |
| 0.2863 | 16.13 | 2000 | 0.6401 | 1.0 |
| 0.2357 | 20.16 | 2500 | 0.6485 | 1.0 |
| 0.1933 | 24.19 | 3000 | 0.7448 | 0.9994 |
| 0.162 | 28.22 | 3500 | 0.7502 | 1.0 |
| 0.1325 | 32.26 | 4000 | 0.7801 | 1.0 |
| 0.1169 | 36.29 | 4500 | 0.8334 | 1.0 |
| 0.1031 | 40.32 | 5000 | 0.8269 | 1.0 |
| 0.0913 | 44.35 | 5500 | 0.8432 | 1.0 |
| 0.0793 | 48.39 | 6000 | 0.8738 | 1.0 |
| 0.0694 | 52.42 | 6500 | 0.8897 | 1.0 |
| 0.0613 | 56.45 | 7000 | 0.8966 | 1.0 |
| 0.0548 | 60.48 | 7500 | 0.9398 | 1.0 |
| 0.0444 | 64.51 | 8000 | 0.9548 | 1.0 |
| 0.0386 | 68.55 | 8500 | 0.9647 | 1.0 |
| 0.0359 | 72.58 | 9000 | 0.9901 | 1.0 |
| 0.0299 | 76.61 | 9500 | 1.0151 | 1.0 |
| 0.0259 | 80.64 | 10000 | 1.0526 | 1.0 |
| 0.022 | 84.67 | 10500 | 1.0754 | 1.0 |
| 0.0189 | 88.71 | 11000 | 1.0688 | 1.0 |
| 0.0161 | 92.74 | 11500 | 1.0914 | 1.0 |
| 0.0138 | 96.77 | 12000 | 1.1064 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
sshleifer/bb12 | cce97c9cc9c33f4e8f526adc24ea507a7ce273f0 | 2020-09-19T04:19:18.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/bb12 | 1 | null | transformers | 30,336 | Entry not found |
sshleifer/distill-mbart-en-ro-12-9 | 15f4fb7fd9c24278d59e7633890266a9b8b113bb | 2020-09-10T15:56:54.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/distill-mbart-en-ro-12-9 | 1 | null | transformers | 30,337 | Entry not found |
sshleifer/distill-pegasus-xsum-12-12 | 4dbb7c6ff132bd06e23cfa0f47b31903934af290 | 2020-10-14T16:12:31.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/distill-pegasus-xsum-12-12 | 1 | null | transformers | 30,338 | Entry not found |
sshleifer/student_blarge_12_3 | f978ee12042f257f34ccf74359c17ad78fb547c9 | 2021-06-14T08:27:56.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_blarge_12_3 | 1 | null | transformers | 30,339 | Entry not found |
sshleifer/student_cnn_6_6 | e4ba27bbc3fab4a008b6ea4227553b783fa73b99 | 2021-06-14T09:20:09.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_cnn_6_6 | 1 | null | transformers | 30,340 | Entry not found |
sshleifer/student_enro_avg_12_2 | cdd1b7dd2d3503d693ee17f83c674c21b95f343f | 2020-07-18T20:16:27.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_enro_avg_12_2 | 1 | null | transformers | 30,341 | Entry not found |
sshleifer/student_mbart_en_ro_12_2 | 94db2dac0fc4eec3c55b85332f6380dd272b71b2 | 2020-07-15T15:14:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_mbart_en_ro_12_2 | 1 | null | transformers | 30,342 | Entry not found |
sshleifer/student_mbart_en_ro_12_4 | f066f26f13622f2f2f3420bddf2ab95c345f4329 | 2020-07-15T15:14:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_mbart_en_ro_12_4 | 1 | null | transformers | 30,343 | Entry not found |
sshleifer/student_mbart_en_ro_12_9 | f26597750669e435fd9a347abab259fae8fd84c6 | 2020-07-15T15:26:58.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_mbart_en_ro_12_9 | 1 | null | transformers | 30,344 | Entry not found |
sshleifer/student_mbart_en_ro_6_6 | d7c7c98f7402df5605eb419cfb222ed88d2dc0b2 | 2020-07-15T15:27:55.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_mbart_en_ro_6_6 | 1 | null | transformers | 30,345 | Entry not found |
sshleifer/student_xsum_12_4 | 735c4736e93ef4933b118fe6cb7e57d88643224e | 2021-06-14T09:48:49.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_xsum_12_4 | 1 | null | transformers | 30,346 | Entry not found |
stanleychu2/blenderbot_user_simulator_both_domain | dd2d3b458dcc7942cf75775c0e6d7b68288d2538 | 2021-12-13T03:02:53.000Z | [
"pytorch",
"blenderbot-small",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | stanleychu2 | null | stanleychu2/blenderbot_user_simulator_both_domain | 1 | null | transformers | 30,347 | Entry not found |
stasvmk/honeymad_gpt_ru_v0_01 | bd4fa834eddcbd0c334e21b1a66fcb15da31d6a9 | 2022-01-10T07:41:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | stasvmk | null | stasvmk/honeymad_gpt_ru_v0_01 | 1 | null | transformers | 30,348 | Entry not found |
stefan-it/electra-base-gc4-64k-0-cased-generator | 65041ed72d818a6d48f95fa33de1d7e9f5b55cdc | 2021-04-30T22:25:17.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | stefan-it | null | stefan-it/electra-base-gc4-64k-0-cased-generator | 1 | null | transformers | 30,349 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
widget:
- text: "Heute ist ein [MASK] Tag"
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
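For research use, the generator checkpoint can be queried like any fill-mask model; a minimal sketch using the widget prompt from this card's metadata:
```python
from transformers import pipeline

# Research-only sketch: query the generator with the widget prompt from the card metadata.
fill_mask = pipeline("fill-mask", model="stefan-it/electra-base-gc4-64k-0-cased-generator")
for prediction in fill_mask("Heute ist ein [MASK] Tag"):
    print(prediction["token_str"], round(prediction["score"], 4))
```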
|
stefan-it/electra-base-gc4-64k-100000-cased-discriminator | 64e25b530b14ac2ad49096ef8ddbddd31dca3f6b | 2021-04-30T22:33:21.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit"
] | null | false | stefan-it | null | stefan-it/electra-base-gc4-64k-100000-cased-discriminator | 1 | null | transformers | 30,350 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
|
stefan-it/electra-base-gc4-64k-100000-cased-generator | 7ff661bb959f8602514f6082d9ae340c15b5c9e1 | 2021-05-01T11:16:57.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | stefan-it | null | stefan-it/electra-base-gc4-64k-100000-cased-generator | 1 | null | transformers | 30,351 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
widget:
- text: "Heute ist ein [MASK] Tag"
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
|
stefan-it/electra-base-gc4-64k-1000000-cased-discriminator | b278041fa3a92926a1a2b1615dbfae3ed0d820b9 | 2021-05-01T11:13:39.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit"
] | null | false | stefan-it | null | stefan-it/electra-base-gc4-64k-1000000-cased-discriminator | 1 | null | transformers | 30,352 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
|
stefan-it/electra-base-gc4-64k-200000-cased-generator | b3ba2685533ecf3b54487b91de619a0dabba4247 | 2021-05-01T11:17:26.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | stefan-it | null | stefan-it/electra-base-gc4-64k-200000-cased-generator | 1 | null | transformers | 30,353 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
widget:
- text: "Heute ist ein [MASK] Tag"
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
|
stefan-it/electra-base-gc4-64k-300000-cased-discriminator | 3faca80e0f2f6030986f69c8bdf4e7cd893d1236 | 2021-04-30T22:38:04.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit"
] | null | false | stefan-it | null | stefan-it/electra-base-gc4-64k-300000-cased-discriminator | 1 | null | transformers | 30,354 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
|
stefan-it/electra-base-gc4-64k-400000-cased-generator | 101c8a47e9fc7ee9352fb1840ace7fd2652bbb0d | 2021-05-01T11:19:45.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | stefan-it | null | stefan-it/electra-base-gc4-64k-400000-cased-generator | 1 | null | transformers | 30,355 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
widget:
- text: "Heute ist ein [MASK] Tag"
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
|
stefan-it/electra-base-gc4-64k-600000-cased-generator | 86bdcbde93aad61014a15ee6a494110f13136fce | 2021-05-01T11:21:31.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | stefan-it | null | stefan-it/electra-base-gc4-64k-600000-cased-generator | 1 | null | transformers | 30,356 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
widget:
- text: "Heute ist ein [MASK] Tag"
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
|
stefan-it/electra-base-gc4-64k-800000-cased-generator | 7c5f973f8c832d7619ccdd5cf014c8e6ad659d91 | 2021-05-01T11:23:30.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | stefan-it | null | stefan-it/electra-base-gc4-64k-800000-cased-generator | 1 | null | transformers | 30,357 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
widget:
- text: "Heute ist ein [MASK] Tag"
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
|
subbareddyiiit/roberta_csl_gold8k | 9f95fabc28733076106616113b184015f0c41c94 | 2021-05-20T22:01:14.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | subbareddyiiit | null | subbareddyiiit/roberta_csl_gold8k | 1 | null | transformers | 30,358 | hello
|
subham92/translation_model_by_subham | d8815c1e69f8d512e942ad978c4529a1def80c80 | 2021-01-18T10:29:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | subham92 | null | subham92/translation_model_by_subham | 1 | null | transformers | 30,359 | ---
language:
- fi
- en
tags:
- translation
license: apache-2.0
---
|
suksun1412/wangchanberta-ner-2 | c0efd489881e8fb4432ed1b21885d42364e176c7 | 2022-02-15T04:18:16.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | suksun1412 | null | suksun1412/wangchanberta-ner-2 | 1 | null | transformers | 30,360 | Entry not found |
sultan/ArabicTransformer-large | 3ec88c13ec3f6a530fe7d707ee98ed1b61015c2c | 2021-12-05T17:06:51.000Z | [
"pytorch",
"funnel",
"feature-extraction",
"arxiv:2006.03236",
"transformers"
] | feature-extraction | false | sultan | null | sultan/ArabicTransformer-large | 1 | 1 | transformers | 30,361 | ArabicTransformer Large model (B8-8-8 with decoder)
<b>Paper</b> : ArabicTransformer: Efficient Large Arabic Language Model with Funnel Transformer and ELECTRA Objective (EMNLP21)
<b>Abstract</b>
Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, demonstrated by both AraBERT and AraELECTRA, shows an impressive result on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in the pretraining cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using less computational resources compared to other BERT-based models.
<b>Description</b>
This model was pre-trained on 44GB of Arabic corpora using [Funnel Transformer with ELECTRA objective](https://arxiv.org/abs/2006.03236). We will update you with more details about the model and our accepted paper later at EMNLP21. Check our GitHub page for the latest updates and examples: https://github.com/salrowili/ArabicTransformer
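A minimal feature-extraction sketch, assuming the standard Auto classes resolve this Funnel-based checkpoint (the example sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Minimal sketch: extract contextual token embeddings from the pre-trained checkpoint.
tokenizer = AutoTokenizer.from_pretrained("sultan/ArabicTransformer-large")
model = AutoModel.from_pretrained("sultan/ArabicTransformer-large")

inputs = tokenizer("اللغة العربية جميلة", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
print(hidden_states.shape)
```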
```bibtex
@inproceedings{alrowili-shanker-2021-arabictransformer-efficient,
title = "{A}rabic{T}ransformer: Efficient Large {A}rabic Language Model with Funnel Transformer and {ELECTRA} Objective",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.108",
pages = "1255--1261",
abstract = "Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, demonstrated by both AraBERT and AraELECTRA, shows an impressive result on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in the pre-training cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using less computational resources compared to other BERT-based models.",
}
``` |
sultan/ArabicTransformer-small-encoder | e3c60548d1c4c48b8e0c00307ba6239c8a6a32e9 | 2021-10-08T06:25:01.000Z | [
"pytorch",
"funnel",
"feature-extraction",
"transformers"
] | feature-extraction | false | sultan | null | sultan/ArabicTransformer-small-encoder | 1 | null | transformers | 30,362 | Entry not found |
sultan/ArabicTransformer-small | 1c91581e016d56e6130db642369bcebbf9e15774 | 2021-12-05T17:07:06.000Z | [
"pytorch",
"funnel",
"feature-extraction",
"arxiv:2006.03236",
"transformers"
] | feature-extraction | false | sultan | null | sultan/ArabicTransformer-small | 1 | null | transformers | 30,363 | ArabicTransformer small model (B4-4-4 with decoder)
<b>Paper</b> : ArabicTransformer: Efficient Large Arabic Language Model with Funnel Transformer and ELECTRA Objective (EMNLP21)
<b>Abstract</b>
Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, demonstrated by both AraBERT and AraELECTRA, shows an impressive result on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in the pretraining cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using less computational resources compared to other BERT-based models.
<b>Description</b>
This model was pre-trained on 44GB of Arabic corpora using [Funnel Transformer with the ELECTRA objective](https://arxiv.org/abs/2006.03236). It is faster than the ELECTRA-base architecture while having the same number of parameters, and it was pre-trained with significantly fewer resources than state-of-the-art models. More details about the model and our accepted EMNLP 2021 paper will follow.
Check our GitHub page for the latest updates and examples: https://github.com/salrowili/ArabicTransformer
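A minimal feature-extraction sketch follows (not part of the original card); it assumes only that the checkpoint loads through the standard Auto classes, and mean-pools the encoder output into sentence embeddings.
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "sultan/ArabicTransformer-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # B4-4-4 with decoder, so outputs align with the input tokens

sentences = ["اللغة العربية جميلة", "التعلم العميق ممتع"]  # illustrative sentences
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state              # (batch, seq_len, hidden)
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling over real tokens
print(embeddings.shape)
```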
```bibtex
@inproceedings{alrowili-shanker-2021-arabictransformer-efficient,
title = "{A}rabic{T}ransformer: Efficient Large {A}rabic Language Model with Funnel Transformer and {ELECTRA} Objective",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.108",
pages = "1255--1261",
abstract = "Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, demonstrated by both AraBERT and AraELECTRA, shows an impressive result on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in the pre-training cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using less computational resources compared to other BERT-based models.",
}
``` |
sunhao666/chi-sum | 82302bbf76608ec11443ca607a7de83d860073f2 | 2021-05-19T17:32:16.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | sunhao666 | null | sunhao666/chi-sum | 1 | null | transformers | 30,364 | |
sunitha/FT_AQG_Configs | 3310b7e725e58308e349186a3102294af0006b8b | 2022-02-09T13:04:55.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sunitha | null | sunitha/FT_AQG_Configs | 1 | null | transformers | 30,365 | Entry not found |
sunitha/distilbert-base-uncased-3feb-2022-finetuned-squad | 490a38cc542d79832c605c589292becadbc87bbc | 2022-02-03T05:06:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | sunitha | null | sunitha/distilbert-base-uncased-3feb-2022-finetuned-squad | 1 | null | transformers | 30,366 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-3feb-2022-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-3feb-2022-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1470
## Model description
More information needed
## Intended uses & limitations
More information needed
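No usage snippet is included in this card; the sketch below is a hedged illustration with the `question-answering` pipeline (the question/context strings are invented for demonstration):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="sunitha/distilbert-base-uncased-3feb-2022-finetuned-squad",
)
# Illustrative inputs only
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```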
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2276 | 1.0 | 5533 | 1.1641 |
| 0.9614 | 2.0 | 11066 | 1.1225 |
| 0.7769 | 3.0 | 16599 | 1.1470 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sunitha/output_files | 55bd18518478d2464c792bd4bbc0bf2ec99a3958 | 2021-12-13T13:57:03.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sunitha | null | sunitha/output_files | 1 | null | transformers | 30,367 | Question Answering - Build - 1 |
suojianhua/itcast-nlp-base | 9e7d2ac06ffc293e161154193c0b41721327baaa | 2022-02-14T07:00:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | suojianhua | null | suojianhua/itcast-nlp-base | 1 | null | transformers | 30,368 | Entry not found |
sv/gpt2-finetuned-nft-shakes-seuss | 973c2c01f4bc74e349b6b7c76b1ef6e9301cfbe5 | 2021-09-06T19:35:40.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | sv | null | sv/gpt2-finetuned-nft-shakes-seuss | 1 | null | transformers | 30,369 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: gpt2-finetuned-nft-shakes-seuss
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-nft-shakes-seuss
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
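As a hedged illustration of intended use, a minimal text-generation sketch (the prompt and sampling parameters are placeholders, not tuned values):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="sv/gpt2-finetuned-nft-shakes-seuss")
out = generator(
    "Once upon a midnight dreary",  # illustrative prompt
    max_length=60,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=1,
)
print(out[0]["generated_text"])
```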
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2178 | 1.0 | 1095 | 4.0073 |
| 3.9522 | 2.0 | 2190 | 3.8824 |
| 3.8393 | 3.0 | 3285 | 3.8505 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
sv/gpt2-nft-poetry | 2b40c797ba2d0ebe7babc3759f6ca7caf8516b7a | 2021-09-08T16:15:47.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | sv | null | sv/gpt2-nft-poetry | 1 | null | transformers | 30,370 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: gpt2-nft-poetry
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-nft-poetry
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0243
## Model description
More information needed
## Intended uses & limitations
More information needed
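For completeness, a hedged sketch of direct use through `generate()` (the prompt and sampling settings are placeholders):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "sv/gpt2-nft-poetry"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The moon above the marketplace", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=60,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```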
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 282 | 4.3092 |
| 4.5403 | 2.0 | 564 | 4.1283 |
| 4.5403 | 3.0 | 846 | 4.0605 |
| 4.039 | 4.0 | 1128 | 4.0321 |
| 4.039 | 5.0 | 1410 | 4.0243 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
svsokol/opus-mt-ru-en-finetuned-en-to-ru | 446f5053100008550f7f264c0edad97b98637978 | 2021-12-14T19:53:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | svsokol | null | svsokol/opus-mt-ru-en-finetuned-en-to-ru | 1 | null | transformers | 30,371 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: opus-mt-ru-en-finetuned-en-to-ru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ru-en-finetuned-en-to-ru
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
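No usage example is given; the sketch below uses the `translation` pipeline. The model name suggests English→Russian even though the base checkpoint is `opus-mt-ru-en`, so treat the direction as an assumption to verify:
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="svsokol/opus-mt-ru-en-finetuned-en-to-ru",
)
# Assumed direction (en -> ru) based on the model name; verify before relying on it.
print(translator("Machine translation is fun.")[0]["translation_text"])
```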
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
swcrazyfan/TE-v3-10K | f56d2d6e352ebf295a1abe565ba777992b2f4675 | 2021-05-29T03:21:08.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/TE-v3-10K | 1 | null | transformers | 30,372 | Entry not found |
swcrazyfan/TE-v3-12K | 6cb40bf0dd6c10c3073cad3eafe6bdcebf409a7a | 2021-05-29T06:32:52.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/TE-v3-12K | 1 | null | transformers | 30,373 | Entry not found |
swcrazyfan/TEFL-V3 | 285d25f7df525ac55768584249951766796453e5 | 2021-06-14T07:17:34.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/TEFL-V3 | 1 | null | transformers | 30,374 | Entry not found |
tabo/distilbert-base-uncased-finetuned-squad2 | 8fa621bd48b3898f27b71b06baa9cad24c1cd76f | 2021-12-17T07:22:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | tabo | null | tabo/distilbert-base-uncased-finetuned-squad2 | 1 | null | transformers | 30,375 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1606
## Model description
More information needed
## Intended uses & limitations
More information needed
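A hedged sketch of manual inference without the pipeline helper (the question/context pair is invented for illustration):
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "tabo/distilbert-base-uncased-finetuned-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What dataset was used for fine-tuning?"
context = "The checkpoint was fine-tuned on the SQuAD dataset for three epochs."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = int(outputs.start_logits.argmax())  # most likely answer start token
end = int(outputs.end_logits.argmax())      # most likely answer end token
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```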
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2306 | 1.0 | 5533 | 1.1557 |
| 0.9535 | 2.0 | 11066 | 1.1260 |
| 0.7629 | 3.0 | 16599 | 1.1606 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
taesu/ts-test | 1470fbe0f3b39f9281d0ee7eb3a622662cf37a7c | 2022-02-16T23:28:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | taesu | null | taesu/ts-test | 1 | null | transformers | 30,376 | Entry not found |
tareknaous/bert2bert-daily-dialog | 701d14e39314dc8c5fb170b5a0603b047aed4e72 | 2022-02-21T08:39:32.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tareknaous | null | tareknaous/bert2bert-daily-dialog | 1 | null | transformers | 30,377 | Entry not found |
tareknaous/t5-daily-dialog | 62cf98068835eb390be00a04a6b4d662e1eff762 | 2022-02-21T08:50:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tareknaous | null | tareknaous/t5-daily-dialog | 1 | null | transformers | 30,378 | Entry not found |
tarikul/distilbert-base-uncased-finetuned-squad | 75c71643c1a690ed6d97fd444e87291ecc68ba6a | 2021-09-12T07:19:38.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | tarikul | null | tarikul/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 30,379 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results:
- task:
name: Question Answering
type: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
tbochens/dummy-model | ae9ea962bf672f6a04b5dea85ceec2942d768bcb | 2021-12-29T19:36:22.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | tbochens | null | tbochens/dummy-model | 1 | null | transformers | 30,380 | Entry not found |
tdopierre/ProtAugment-LM-Liu | e864181fdb7719aace4c3e14d618616c2864a371 | 2021-07-01T13:54:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | tdopierre | null | tdopierre/ProtAugment-LM-Liu | 1 | null | transformers | 30,381 | Entry not found |
teacookies/autonlp-more_fine_tune_24465520-26265903 | 28e9550d677a9f3f62fb466099e7165a99c0c8e5 | 2021-10-25T09:35:40.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265903 | 1 | null | transformers | 30,382 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 108.13983395548236
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265903
- CO2 Emissions (in grams): 108.13983395548236
## Validation Metrics
- Loss: 0.6330059170722961
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265903
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265903", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265903", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265904 | f8c7160137a61b7af3ba48187659166e6ada88d7 | 2021-10-25T09:36:11.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265904 | 1 | null | transformers | 30,383 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 108.63800043275934
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265904
- CO2 Emissions (in grams): 108.63800043275934
## Validation Metrics
- Loss: 0.5807144045829773
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265904
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265904", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265904", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265909 | fddd8d19f3995d35730f721062f17c3eaead4474 | 2021-10-25T09:20:12.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265909 | 1 | null | transformers | 30,384 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 80.25874179679201
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265909
- CO2 Emissions (in grams): 80.25874179679201
## Validation Metrics
- Loss: 5.950643062591553
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265909
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265909", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265909", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465519 | 72e0ec888dea16171a4f663c777f6dc8312ebcfd | 2021-10-22T08:13:26.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465519 | 1 | null | transformers | 30,385 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 58.19097299648645
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465519
- CO2 Emissions (in grams): 58.19097299648645
## Validation Metrics
- Loss: 0.566668689250946
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465519
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465519", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465519", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465521 | ecdf74d16363393bdec78fe6277da0041cf98925 | 2021-10-22T08:21:40.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465521 | 1 | null | transformers | 30,386 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 70.20260764805424
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465521
- CO2 Emissions (in grams): 70.20260764805424
## Validation Metrics
- Loss: 0.6295848488807678
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465521
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465521", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465521", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465523 | 66696373eb47a7af78181b4b223939624ad3d329 | 2021-10-22T08:13:18.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465523 | 1 | null | transformers | 30,387 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 56.99866929988893
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465523
- CO2 Emissions (in grams): 56.99866929988893
## Validation Metrics
- Loss: 0.5468788146972656
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465523
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465523", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465523", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465525 | b2702c3d4d0472404acd7c889618889b932805ff | 2021-10-22T08:23:09.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465525 | 1 | null | transformers | 30,388 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 63.997230261104875
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465525
- CO2 Emissions (in grams): 63.997230261104875
## Validation Metrics
- Loss: 0.5740988850593567
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465525
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465525", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465525", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
terter/rick-bot-test-v2 | ce3375e111b48987bd1059993e16c699639979c7 | 2021-09-13T15:16:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | terter | null | terter/rick-bot-test-v2 | 1 | null | transformers | 30,389 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
testimonial/wav2vec2-base-timit-demo-colab | 70f9b4c3b6484ddfbdb5949c6041c870d85c428a | 2022-02-03T03:07:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | testimonial | null | testimonial/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 30,390 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4688
- Wer: 0.3417
## Model description
More information needed
## Intended uses & limitations
More information needed
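As a hedged usage sketch (the audio path is a placeholder; a 16 kHz mono recording is assumed, matching the wav2vec2-base sampling rate):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="testimonial/wav2vec2-base-timit-demo-colab",
)
# "sample.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```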
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4156 | 4.0 | 500 | 1.2721 | 0.8882 |
| 0.6145 | 8.0 | 1000 | 0.4712 | 0.4510 |
| 0.229 | 12.0 | 1500 | 0.4459 | 0.3847 |
| 0.1312 | 16.0 | 2000 | 0.4739 | 0.3786 |
| 0.0897 | 20.0 | 2500 | 0.4483 | 0.3562 |
| 0.0608 | 24.0 | 3000 | 0.4450 | 0.3502 |
| 0.0456 | 28.0 | 3500 | 0.4688 | 0.3417 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
teven/roberta_kelm_tekgen | 86fe78bc97435451a703df31c641f25a5f5dd093 | 2021-11-22T01:04:55.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | teven | null | teven/roberta_kelm_tekgen | 1 | null | sentence-transformers | 30,391 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# teven/roberta_kelm_tekgen
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/roberta_kelm_tekgen')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('teven/roberta_kelm_tekgen')
model = AutoModel.from_pretrained('teven/roberta_kelm_tekgen')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/roberta_kelm_tekgen)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 976035 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 394379 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
[
{
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
]
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
thaalesalves/jurandir | 3f83db2fa4a20310472e85e7484a89c995848cb4 | 2021-07-06T01:25:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | thaalesalves | null | thaalesalves/jurandir | 1 | null | transformers | 30,392 | # DialoGPT small - Jurandir
This is Jurandir, a GPT-2 model based on DialoGPT that speaks Portuguese. It was trained on datasets built from Wikipedia and from the [Brazilian Portuguese Literature Corpus](https://www.kaggle.com/rtatman/brazilian-portuguese-literature-corpus). The model is initially intended to be used with the KoboldAI server in combination with the [Jurandir](https://github.com/thaalesalves/jurandir) Discord bot. |
thetlwin/DialoGPT-small-ironman | 872c0a188040fbd8792cf316969afa07a639aa93 | 2021-09-14T05:56:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | thetlwin | null | thetlwin/DialoGPT-small-ironman | 1 | null | transformers | 30,393 | ---
tags:
- conversational
---
# Ironman DialoGPT Model (small) |
thorduragust/XLMR-ENIS-finetuned-ner | dfa8391afedee73fb5518b8332ab444b65c0f03a | 2021-10-05T15:40:05.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | thorduragust | null | thorduragust/XLMR-ENIS-finetuned-ner | 1 | null | transformers | 30,394 | ---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8707943925233644
- name: Recall
type: recall
value: 0.8475270039795338
- name: F1
type: f1
value: 0.8590031691155287
- name: Accuracy
type: accuracy
value: 0.982856184128243
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0916
- Precision: 0.8708
- Recall: 0.8475
- F1: 0.8590
- Accuracy: 0.9829
## Model description
More information needed
## Intended uses & limitations
More information needed
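A minimal, hedged inference sketch with the `token-classification` pipeline (the Icelandic sentence is illustrative; the MIM-GOLD-NER label set applies):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="thorduragust/XLMR-ENIS-finetuned-ner",
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)
print(ner("Jón Sigurðsson fæddist á Hrafnseyri."))  # illustrative Icelandic sentence
```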
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0581 | 1.0 | 2904 | 0.1055 | 0.8477 | 0.8057 | 0.8262 | 0.9791 |
| 0.0316 | 2.0 | 5808 | 0.0902 | 0.8574 | 0.8349 | 0.8460 | 0.9813 |
| 0.0201 | 3.0 | 8712 | 0.0916 | 0.8708 | 0.8475 | 0.8590 | 0.9829 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
threem/mysquadv2-finetuned-squad | 3f6d92c13c7d11a76a6ced970779dda4c4ff95a3 | 2022-01-08T06:14:47.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | threem | null | threem/mysquadv2-finetuned-squad | 1 | null | transformers | 30,395 | Entry not found |
thyagosme/gpt2-wikitext2 | 09e59fafe472f6cfab14e4e86b67613de0174083 | 2022-02-09T03:17:38.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | thyagosme | null | thyagosme/gpt2-wikitext2 | 1 | null | transformers | 30,396 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1095
## Model description
More information needed
## Intended uses & limitations
More information needed
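Causal-LM quality is usually quoted as perplexity; here is a small worked example converting the evaluation loss reported above (average cross-entropy of 6.1095) into perplexity:
```python
import math

eval_loss = 6.1095            # evaluation cross-entropy per token, from the results above
perplexity = math.exp(eval_loss)
print(round(perplexity, 1))   # ≈ 450
```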
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5576 | 1.0 | 2249 | 6.4681 |
| 6.1905 | 2.0 | 4498 | 6.1976 |
| 6.0005 | 3.0 | 6747 | 6.1095 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
tiagohatta/opus-mt-de-en-finetuned-de-to-en-first | 792aed25294f00a824e6b90f2a86b416732cef1a | 2021-11-27T13:04:18.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | tiagohatta | null | tiagohatta/opus-mt-de-en-finetuned-de-to-en-first | 1 | null | transformers | 30,397 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-de-en-finetuned-de-to-en-first
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 39.8122
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-de-en-finetuned-de-to-en-first
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-de-en](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1465
- Bleu: 39.8122
- Gen Len: 25.579
## Model description
More information needed
## Intended uses & limitations
More information needed
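A hedged sketch of how a corpus BLEU score like the one reported above can be computed with `sacrebleu` (the sentences are invented placeholders, not WMT16 data):
```python
import sacrebleu

# Placeholder system outputs and references; a real evaluation would use the wmt16 de-en test split.
hypotheses = ["The hotel is located in the city centre."]
references = [["The hotel is in the city centre."]]  # one reference stream, aligned with hypotheses

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))
```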
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 63 | 1.1465 | 39.8122 | 25.579 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tiagohatta/opus-mt-de-en-finetuned-de-to-en-second | 55b13f148055618ea69e0d04b69a470dc353e215 | 2021-11-30T17:23:04.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | tiagohatta | null | tiagohatta/opus-mt-de-en-finetuned-de-to-en-second | 1 | null | transformers | 30,398 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-de-en-finetuned-de-to-en-second
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 38.959
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-de-en-finetuned-de-to-en-second
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-de-en](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1719
- Bleu: 38.959
- Gen Len: 25.2812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 157 | 1.1492 | 39.2552 | 25.2268 |
| No log | 2.0 | 314 | 1.1601 | 38.8343 | 25.2288 |
| No log | 3.0 | 471 | 1.1651 | 39.0092 | 25.254 |
| 1.8512 | 4.0 | 628 | 1.1704 | 38.9281 | 25.2756 |
| 1.8512 | 5.0 | 785 | 1.1719 | 38.959 | 25.2812 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ticet11/DialoGPT-small-BOBBY | f5412f41ef0ae0f339ffff14ea56db9ac33baa89 | 2021-10-01T03:56:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ticet11 | null | ticet11/DialoGPT-small-BOBBY | 1 | null | transformers | 30,399 | ---
tags:
- conversational
---
# Bobby Hill DialoGPT Model |