modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
aiface/95k | c153f82f819ed8ef7114e753c2bf294498d5d6e8 | 2022-03-18T02:57:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | aiface | null | aiface/95k | 1 | null | transformers | 30,900 | Entry not found |
GleamEyeBeast/common_voice_dataset_model_naive_char | c3274256e2e1af5f0e8d275b399ca87930d427ad | 2022-03-20T18:04:14.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | GleamEyeBeast | null | GleamEyeBeast/common_voice_dataset_model_naive_char | 1 | null | transformers | 30,901 | Entry not found |
brokorli/brokorli_mrc | b021ba5da66effc30464c336926f75bf9a443709 | 2022-03-18T05:30:24.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | brokorli | null | brokorli/brokorli_mrc | 1 | 1 | transformers | 30,902 | Entry not found |
erinchocolate/DialoGPT-small-harrypotter | 287fd76699c03796944da18784612f7ccfec961c | 2022-03-18T08:05:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | erinchocolate | null | erinchocolate/DialoGPT-small-harrypotter | 1 | null | transformers | 30,903 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
cammy/bart-large-cnn-100-MDS-own | cb89877bf65cf0abf2d510e719063abd2c46227e | 2022-03-18T09:32:08.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-100-MDS-own | 1 | null | transformers | 30,904 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-100-MDS-own
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-100-MDS-own
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5357
- Rouge1: 22.4039
- Rouge2: 4.681
- Rougel: 13.1526
- Rougelsum: 15.7986
- Gen Len: 70.3
## Model description
More information needed
## Intended uses & limitations
More information needed
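A minimal inference sketch for this summarization fine-tune; the Hub id below is taken from this repository's name and is an assumption about how the checkpoint is published, not part of the original card:
```python
from transformers import pipeline

# Load the fine-tuned BART checkpoint (assumed to be available under this id).
summarizer = pipeline("summarization", model="cammy/bart-large-cnn-100-MDS-own")

document = "Paste the concatenated source documents here."
summary = summarizer(document, max_length=128, min_length=30, truncation=True)
print(summary[0]["summary_text"])
```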
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 25 | 3.3375 | 25.7428 | 6.754 | 16.4131 | 19.6269 | 81.9 |
| No log | 2.0 | 50 | 3.5357 | 22.4039 | 4.681 | 13.1526 | 15.7986 | 70.3 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
brokorli/brokorli_ner | 8cfea0d2d6ddf86e956bc21b81cc84a85a18f09d | 2022-05-30T13:55:36.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | brokorli | null | brokorli/brokorli_ner | 1 | 1 | transformers | 30,905 | Entry not found |
eliasws/openApiT5-to-json-v1 | 6a4fe746b0eaf5a2b459788f5e9c19cb62287338 | 2022-03-18T10:32:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eliasws | null | eliasws/openApiT5-to-json-v1 | 1 | null | transformers | 30,906 | Entry not found |
beston91/gpt2-xl-ft-logits-1k | b700341214eb682995adf44bdafb6dcdbf8a2f26 | 2022-03-19T22:46:27.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | beston91 | null | beston91/gpt2-xl-ft-logits-1k | 1 | null | transformers | 30,907 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-logits-1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-logits-1k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5341
## Model description
More information needed
## Intended uses & limitations
More information needed
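A minimal generation sketch, assuming the fine-tuned weights are published under `beston91/gpt2-xl-ft-logits-1k` (illustrative only, not from the original card):
```python
from transformers import pipeline

# GPT-2 XL fine-tune used as a causal language model for open-ended generation.
generator = pipeline("text-generation", model="beston91/gpt2-xl-ft-logits-1k")

outputs = generator("The story begins", max_length=50, do_sample=True, num_return_sequences=1)
print(outputs[0]["generated_text"])
```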
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.91 | 5 | 5.5302 |
| No log | 1.91 | 10 | 5.5310 |
| No log | 2.91 | 15 | 5.5323 |
| No log | 3.91 | 20 | 5.5341 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.59481430053711
### Dataset Size
Size: 5000 |
brokorli/brokorli_sm | bc71b4e16de310377a0109f4616fef5350396957 | 2022-05-30T14:13:05.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | brokorli | null | brokorli/brokorli_sm | 1 | 1 | transformers | 30,908 | Entry not found |
facebook/regnet-x-320 | bac84c8ded94a58b07ca1794943141d053fd38c7 | 2022-06-30T10:14:40.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-x-320 | 1 | null | transformers | 30,909 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
facebook/regnet-y-160 | b0f06bf39fc48a363970806505d48192a0f6d8c9 | 2022-06-28T11:39:06.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-y-160 | 1 | null | transformers | 30,910 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
Valouzze/FairuvenIA | 66403746287f95c9923bb478658544ec8bd61e8c | 2022-03-18T16:51:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Valouzze | null | Valouzze/FairuvenIA | 1 | null | transformers | 30,911 | ---
tags:
- conversational
---
# My Awesome Model |
IsaacSST/gpt2-xl-ft-d2 | 035134474e29dca338a2c9fa35d7850cb6715d43 | 2022-03-18T20:51:40.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | IsaacSST | null | IsaacSST/gpt2-xl-ft-d2 | 1 | null | transformers | 30,912 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d2
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
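For illustration, a rough sketch of how the configuration above could be expressed with `TrainingArguments`; the output directory is hypothetical and this is not the authors' original training script:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above for this fine-tune.
training_args = TrainingArguments(
    output_dir="gpt2-xl-ft-d2",       # hypothetical output directory
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=32,   # 4 x 32 gives the total train batch size of 128
    seed=2022,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=4,
)
```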
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 156 | 1.2309 |
| No log | 2.0 | 312 | 1.2382 |
| No log | 3.0 | 468 | 1.2997 |
| 1.172 | 4.0 | 624 | 1.3483 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
emilygs2/bert-base-uncased-finetuned-genderswap | a70293552c1e4fb2ac1cab8fdd8e5040274258d9 | 2022-03-18T18:35:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | emilygs2 | null | emilygs2/bert-base-uncased-finetuned-genderswap | 1 | null | transformers | 30,913 | Entry not found |
MehSatho/Tai-medium-Hermione | 2fb794900c8f008b8b8afcf1c483cb6581e10707 | 2022-03-18T18:56:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MehSatho | null | MehSatho/Tai-medium-Hermione | 1 | 1 | transformers | 30,914 | ---
tags:
- conversational
---
|
beston91/gpt2-xl_ft_mult_1k | 26b1169f495d3cc06b3d6a553dc4ea5a546fa43f | 2022-03-19T23:56:20.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | beston91 | null | beston91/gpt2-xl_ft_mult_1k | 1 | null | transformers | 30,915 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_1k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.91 | 5 | 6.7968 |
| No log | 1.91 | 10 | 6.6621 |
| No log | 2.91 | 15 | 6.4335 |
| No log | 3.91 | 20 | 6.1137 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
beston91/gpt2-xl_ft_mult_5k | 4f2b5d5efce3668eda4ab0e859a93cd64702ecdb | 2022-03-20T17:31:57.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | beston91 | null | beston91/gpt2-xl_ft_mult_5k | 1 | null | transformers | 30,916 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_5k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 27 | 6.3035 |
| No log | 1.99 | 54 | 1.2709 |
| No log | 2.99 | 81 | 0.7482 |
| No log | 3.99 | 108 | 0.6758 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 21.267963409423828
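A minimal sketch of how a perplexity score like the one above could be computed with the fine-tuned model; the held-out text and Hub id are assumptions, not the authors' evaluation script:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beston91/gpt2-xl_ft_mult_5k"  # assumed Hub id for this fine-tune
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Some held-out evaluation text."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels = input_ids makes the model return the mean cross-entropy loss.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print("perplexity:", torch.exp(loss).item())
```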
### Dataset Size
Size: 5000 |
IsaacSST/gpt2-xl-ft-d3 | 20a2074328da1b814773efafba5cb4045bbc6fd0 | 2022-03-19T15:18:26.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | IsaacSST | null | IsaacSST/gpt2-xl-ft-d3 | 1 | null | transformers | 30,917 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d3
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 156 | 1.2135 |
| No log | 2.0 | 312 | 1.2181 |
| No log | 3.0 | 468 | 1.2754 |
| 1.1743 | 4.0 | 624 | 1.3252 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
eliasws/openApiT5-distilled-description-v2 | e12a112a5c35c546b99caf03532fa39b4f2f0331 | 2022-03-19T14:09:15.000Z | [
"pytorch",
"t5",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | eliasws | null | eliasws/openApiT5-distilled-description-v2 | 1 | null | sentence-transformers | 30,918 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4300 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 4300,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
adalbertojunior/test_en_aligned | 3ae3e669ea457aabfb2a65f14e9706e871507e40 | 2022-03-19T14:31:31.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adalbertojunior | null | adalbertojunior/test_en_aligned | 1 | null | transformers | 30,919 | Entry not found |
202015004/MY_st1_training_shreya | c4c91c5c450e7ef1d7f49413f281aa183597b614 | 2022-03-19T17:05:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/MY_st1_training_shreya | 1 | null | transformers | 30,920 | Entry not found |
eliasws/openApiT5-to-json-v2 | abba3ec973859800e11e6305b3f5663e24a26221 | 2022-03-19T15:18:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eliasws | null | eliasws/openApiT5-to-json-v2 | 1 | null | transformers | 30,921 | Entry not found |
Ameer05/tokenizer-repo | f4fe6c2ecf593c29f0576e43802d4651cec04109 | 2022-03-19T18:43:06.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ameer05 | null | Ameer05/tokenizer-repo | 1 | null | transformers | 30,922 | Entry not found |
Aleksandar1932/gpt-neo-125M-metal | 1a24ca93ac2fa012515574fc718d69fbbcddaa46 | 2022-03-19T18:54:56.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | Aleksandar1932 | null | Aleksandar1932/gpt-neo-125M-metal | 1 | null | transformers | 30,923 | Entry not found |
Aleksandar1932/gpt-neo-125M-country | 4e3060bd6127a6dda94ed2a51f55f199cbdec784 | 2022-03-19T19:27:27.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | Aleksandar1932 | null | Aleksandar1932/gpt-neo-125M-country | 1 | null | transformers | 30,924 | Entry not found |
KheireddineDaouadi/AraRoberta | 580544eb2a50a61a4247cab6b2e790d1f18acaf3 | 2022-03-19T19:56:19.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KheireddineDaouadi | null | KheireddineDaouadi/AraRoberta | 1 | null | transformers | 30,925 | Entry not found |
darthrussel/DialoGPT-small-rickandmorty | 83634b5a280ddf75cc7591a041c871809b711bff | 2022-03-19T21:42:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | darthrussel | null | darthrussel/DialoGPT-small-rickandmorty | 1 | null | transformers | 30,926 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
willcai/wav2vec2_common_voice_accents_5 | f23f75cbb3f348a72ef4634a6f59d8b22f4be349 | 2022-03-20T07:07:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | willcai | null | willcai/wav2vec2_common_voice_accents_5 | 1 | null | transformers | 30,927 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0027
## Model description
More information needed
## Intended uses & limitations
More information needed
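A minimal transcription sketch, assuming the fine-tuned checkpoint (with its processor) is published under `willcai/wav2vec2_common_voice_accents_5` and that `sample.wav` is a 16 kHz mono recording; both are assumptions, not part of the original card:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "willcai/wav2vec2_common_voice_accents_5"  # assumed Hub id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load audio at 16 kHz, the sampling rate wav2vec2 expects.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: take the most likely token per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```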
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4163 | 1.34 | 400 | 0.5520 |
| 0.3305 | 2.68 | 800 | 0.1698 |
| 0.2138 | 4.03 | 1200 | 0.1104 |
| 0.1714 | 5.37 | 1600 | 0.0944 |
| 0.1546 | 6.71 | 2000 | 0.0700 |
| 0.1434 | 8.05 | 2400 | 0.0610 |
| 0.1272 | 9.4 | 2800 | 0.0493 |
| 0.1183 | 10.74 | 3200 | 0.0371 |
| 0.1113 | 12.08 | 3600 | 0.0468 |
| 0.1013 | 13.42 | 4000 | 0.0336 |
| 0.0923 | 14.77 | 4400 | 0.0282 |
| 0.0854 | 16.11 | 4800 | 0.0410 |
| 0.0791 | 17.45 | 5200 | 0.0252 |
| 0.0713 | 18.79 | 5600 | 0.0128 |
| 0.0662 | 20.13 | 6000 | 0.0252 |
| 0.0635 | 21.48 | 6400 | 0.0080 |
| 0.0607 | 22.82 | 6800 | 0.0098 |
| 0.0557 | 24.16 | 7200 | 0.0069 |
| 0.0511 | 25.5 | 7600 | 0.0057 |
| 0.0474 | 26.85 | 8000 | 0.0046 |
| 0.045 | 28.19 | 8400 | 0.0037 |
| 0.0426 | 29.53 | 8800 | 0.0027 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
adalbertojunior/test-gpt2 | b5e14ce7ee9d32c3a24318b58310611f743ce4a1 | 2022-03-20T13:51:46.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | adalbertojunior | null | adalbertojunior/test-gpt2 | 1 | null | transformers | 30,928 | Entry not found |
IsaacSST/gpt2-xl-ft-d4-0.15-n-3 | e66c1824f253f889b4613a8fe4e6c367caa1e3c2 | 2022-03-21T07:29:50.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | IsaacSST | null | IsaacSST/gpt2-xl-ft-d4-0.15-n-3 | 1 | null | transformers | 30,929 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d4-0.15-n-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d4-0.15-n-3
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4877
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 156 | 1.3294 |
| No log | 2.0 | 312 | 1.3466 |
| No log | 3.0 | 468 | 1.4295 |
| 1.1304 | 4.0 | 624 | 1.4877 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
tau/fewsion_1024_0.3_3900 | 133450069ac66a0b6823a68ab82017c1c2e283f0 | 2022-03-21T07:27:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/fewsion_1024_0.3_3900 | 1 | null | transformers | 30,930 | Entry not found |
PSW/ut-del-two-at-once-ver3 | 7ffce2e235a8bdec876c99449153af4a8957ebbb | 2022-03-21T07:56:40.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut-del-two-at-once-ver3 | 1 | null | transformers | 30,931 | Entry not found |
Ameer05/test | d146ed6903c487ef532c76ebaa1c81d2d0988198 | 2022-03-21T09:35:03.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | Ameer05 | null | Ameer05/test | 1 | null | transformers | 30,932 | ---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [Ameer05/tokenizer-repo](https://huggingface.co/Ameer05/tokenizer-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6109
- Rouge1: 54.9442
- Rouge2: 45.3299
- Rougel: 50.5219
- Rougelsum: 53.6475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.91 | 5 | 2.3705 | 53.62 | 44.3835 | 49.6135 | 52.693 |
| No log | 1.91 | 10 | 1.9035 | 47.478 | 37.0934 | 39.7935 | 45.1881 |
| No log | 2.91 | 15 | 1.7990 | 54.2488 | 45.0782 | 49.8421 | 52.7564 |
| No log | 3.91 | 20 | 1.7125 | 55.7903 | 46.7554 | 52.2733 | 54.9389 |
| 2.4456 | 4.91 | 25 | 1.6421 | 52.2279 | 43.4391 | 49.6955 | 51.2915 |
| 2.4456 | 5.91 | 30 | 1.6102 | 55.8598 | 47.3293 | 53.1337 | 54.8596 |
| 2.4456 | 6.91 | 35 | 1.6164 | 53.7902 | 44.6622 | 49.5045 | 52.2304 |
| 2.4456 | 7.91 | 40 | 1.6015 | 51.5597 | 42.0333 | 47.9639 | 50.1154 |
| 1.239 | 8.91 | 45 | 1.6067 | 53.0301 | 43.7214 | 49.0227 | 51.8109 |
| 1.239 | 9.91 | 50 | 1.6109 | 54.9442 | 45.3299 | 50.5219 | 53.6475 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
PSW/ut_del_two_per_each_ver1 | 76d1e631b53efd1ede0140abb0fb278d5e7c8908 | 2022-03-21T09:00:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_two_per_each_ver1 | 1 | null | transformers | 30,933 | Entry not found |
PSW/ut_del_two_per_each_ver2 | 3265148f5f6a1bbb4db59de9395ba817553799d7 | 2022-03-21T10:01:46.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_two_per_each_ver2 | 1 | null | transformers | 30,934 | Entry not found |
PSW/ut_del_two_per_each_ver3 | 6139e73bb6dde99f622596c44627da4c61237d41 | 2022-03-21T12:31:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_two_per_each_ver3 | 1 | null | transformers | 30,935 | Entry not found |
peterhsu/bert-finetuned-squad | 8e0b6f5229506a3da500115221d5e6be20048e42 | 2022-03-26T08:48:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | peterhsu | null | peterhsu/bert-finetuned-squad | 1 | null | transformers | 30,936 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
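A minimal extractive question-answering sketch, assuming the checkpoint is published under `peterhsu/bert-finetuned-squad` (an assumption based on this repository's name):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="peterhsu/bert-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```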
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ScandinavianMrT/gpt2_ONION_prefinetune | 944ee2ede4bc520857480095a30fc71167eb5b52 | 2022-03-21T15:05:41.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ScandinavianMrT | null | ScandinavianMrT/gpt2_ONION_prefinetune | 1 | null | transformers | 30,937 | Entry not found |
ianMconversica/autonlp-test-654919306 | 7d3f9ca04695a1365ee17bec150a3db0cc876f5a | 2022-03-21T17:29:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:McIan91/autonlp-data-test",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | ianMconversica | null | ianMconversica/autonlp-test-654919306 | 1 | null | transformers | 30,938 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- McIan91/autonlp-data-test
co2_eq_emissions: 0.7013851565380207
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 654919306
- CO2 Emissions (in grams): 0.7013851565380207
## Validation Metrics
- Loss: 2.5570242404937744
- Rouge1: 72.7273
- Rouge2: 44.4444
- RougeL: 72.7273
- RougeLsum: 72.7273
- Gen Len: 17.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/McIan91/autonlp-test-654919306
``` |
saghar/xtremedistil-l12-h384-uncased-finetuned-wikitext103 | 78b88c38b55f12237e92d22ae9cb2a24bcd56f75 | 2022-03-21T23:47:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"dataset:wikitext",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | saghar | null | saghar/xtremedistil-l12-h384-uncased-finetuned-wikitext103 | 1 | null | transformers | 30,939 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: xtremedistil-l12-h384-uncased-finetuned-wikitext103
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtremedistil-l12-h384-uncased-finetuned-wikitext103
This model is a fine-tuned version of [microsoft/xtremedistil-l12-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7699
## Model description
More information needed
## Intended uses & limitations
More information needed
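A minimal masked-language-modeling sketch, assuming the checkpoint is available under `saghar/xtremedistil-l12-h384-uncased-finetuned-wikitext103` (an assumption; adapt the id as needed):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="saghar/xtremedistil-l12-h384-uncased-finetuned-wikitext103",
)

# This uncased BERT-style checkpoint uses [MASK] as its mask token.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], prediction["score"])
```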
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.3467 | 1.0 | 3125 | 6.9197 |
| 6.9751 | 2.0 | 6250 | 6.8061 |
| 6.9142 | 3.0 | 9375 | 6.7699 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0
- Datasets 1.1.1
- Tokenizers 0.10.1
|
elena-soare/bat-table-aug | 95088070f157df171bfb3aff8855b9e5eaee03fa | 2022-06-07T16:15:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | elena-soare | null | elena-soare/bat-table-aug | 1 | null | transformers | 30,940 | # Text2SQL Task T5-Base + Fine-tuning on Spider + Table Augmentation
This is our T5 model fine-tuned on Spider using a schema serialization that includes a table description, injecting domain knowledge into T5.
## Running the model
Inspired by the work done in [Picard](https://github.com/ElementAI/picard/), we add a table description to the question and serialized schema:
```python
[question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ... description * [table] : <meaning of table>; [table] : <meaning of table> ; ....
```
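A small helper sketch showing how an input in this format could be assembled in Python; the function name and the example schema are illustrative assumptions, not part of the released code:
```python
def serialize_input(question, db_id, schema, table_descriptions):
    """Build a '[question] | [db_id] | table : column ( content , ... ) | ... description * ...' string."""
    table_parts = []
    for table, columns in schema.items():
        cols = " , ".join(
            f"{col} ( {' , '.join(contents)} )" if contents else col
            for col, contents in columns.items()
        )
        table_parts.append(f"{table} : {cols}")
    description = " ; ".join(f"{t} : {d}" for t, d in table_descriptions.items())
    return f"{question} | {db_id} | " + " | ".join(table_parts) + f" description * {description}"

# Hypothetical example for a tiny schema.
print(serialize_input(
    "How many singers are there?",
    "concert_singer",
    {"singer": {"singer_id": [], "name": ["Joe", "Mary"]}},
    {"singer": "information about each singer"},
))
```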
|
willcai/wav2vec2_common_voice_accents_indian | bc2d965c6c8526cc1d085d40e6e23878d913ddb4 | 2022-03-22T10:58:05.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | willcai | null | willcai/wav2vec2_common_voice_accents_indian | 1 | 1 | transformers | 30,941 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_indian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_indian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.5186 | 1.28 | 400 | 0.6937 |
| 0.3485 | 2.56 | 800 | 0.2323 |
| 0.2229 | 3.83 | 1200 | 0.2195 |
| 0.1877 | 5.11 | 1600 | 0.2147 |
| 0.1618 | 6.39 | 2000 | 0.2058 |
| 0.1434 | 7.67 | 2400 | 0.2077 |
| 0.132 | 8.95 | 2800 | 0.1995 |
| 0.1223 | 10.22 | 3200 | 0.2146 |
| 0.1153 | 11.5 | 3600 | 0.2117 |
| 0.1061 | 12.78 | 4000 | 0.2071 |
| 0.1003 | 14.06 | 4400 | 0.2219 |
| 0.0949 | 15.34 | 4800 | 0.2204 |
| 0.0889 | 16.61 | 5200 | 0.2162 |
| 0.0824 | 17.89 | 5600 | 0.2243 |
| 0.0784 | 19.17 | 6000 | 0.2323 |
| 0.0702 | 20.45 | 6400 | 0.2325 |
| 0.0665 | 21.73 | 6800 | 0.2334 |
| 0.0626 | 23.0 | 7200 | 0.2411 |
| 0.058 | 24.28 | 7600 | 0.2473 |
| 0.054 | 25.56 | 8000 | 0.2591 |
| 0.0506 | 26.84 | 8400 | 0.2577 |
| 0.0484 | 28.12 | 8800 | 0.2633 |
| 0.0453 | 29.39 | 9200 | 0.2692 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Bistolero/mt5_two_epocs_nl_3100 | 12ae914d21dcb0e156ce8490dc0e20767aa48154 | 2022-03-21T23:29:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/mt5_two_epocs_nl_3100 | 1 | null | transformers | 30,942 | Entry not found |
ggvick/distilgpt2-finetuned-wikitext2 | 25c10407ca1419d5378a616f2bc83dbac0f462e6 | 2022-03-22T02:29:11.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | ggvick | null | ggvick/distilgpt2-finetuned-wikitext2 | 1 | null | transformers | 30,943 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Bistolero/mix_training_en_du_nl_1 | 4fd8c3e9f2397abff4f6d3fb459dcae6393b9605 | 2022-03-22T02:07:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/mix_training_en_du_nl_1 | 1 | null | transformers | 30,944 | Entry not found |
BigSalmon/InformalToFormalLincoln29 | f679926b1649bd938440d734796036f9c3e9b7f0 | 2022-03-22T03:35:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln29 | 1 | null | transformers | 30,945 | ```
original: chrome extensions [MASK] accomplish everyday tasks.
infill: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
original: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
original:
``` |
202015004/MY_st1_training_shreya_fixed_22_march | 824cc9476b30f9bb086fab66f7ee23c481ca09e2 | 2022-03-22T11:07:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/MY_st1_training_shreya_fixed_22_march | 1 | null | transformers | 30,946 | Entry not found |
tau/fewsion_2_1024_0.3_epoch2 | 2c526402d6fa4cf5a9cb94007dd2ea415fee5690 | 2022-03-22T10:38:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/fewsion_2_1024_0.3_epoch2 | 1 | null | transformers | 30,947 | Entry not found |
tau/pegasus_1024_0.3_epoch2_v2 | 2691a2d8d28d355e90e7e548985f468ce46d39ba | 2022-03-22T10:47:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/pegasus_1024_0.3_epoch2_v2 | 1 | null | transformers | 30,948 | Entry not found |
Dahn/wav2vec2-large-xls-r-300m-turkish-colab | ef1d562c04d6223aaa4035cb7fda1277f821c80b | 2022-03-22T17:29:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Dahn | null | Dahn/wav2vec2-large-xls-r-300m-turkish-colab | 1 | null | transformers | 30,949 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3965
- Wer: 0.3807
## Model description
More information needed
## Intended uses & limitations
More information needed
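A minimal sketch of how a WER figure like the one above could be computed from reference and predicted transcriptions, using the `jiwer` package (an illustrative choice, not necessarily what was used for this card):
```python
from jiwer import wer

references = ["merhaba dünya", "bugün hava çok güzel"]
predictions = ["merhaba dünya", "bugün hava cok güzel"]

# Word error rate over the whole evaluation set (0.0 means a perfect transcription).
print("WER:", wer(references, predictions))
```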
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.974 | 3.67 | 400 | 0.7102 | 0.7318 |
| 0.4216 | 7.34 | 800 | 0.4273 | 0.4941 |
| 0.1891 | 11.01 | 1200 | 0.4548 | 0.4864 |
| 0.1267 | 14.68 | 1600 | 0.4208 | 0.4082 |
| 0.0958 | 18.35 | 2000 | 0.4236 | 0.4033 |
| 0.0799 | 22.02 | 2400 | 0.4052 | 0.3829 |
| 0.0624 | 25.69 | 2800 | 0.4088 | 0.3875 |
| 0.0491 | 29.36 | 3200 | 0.3965 | 0.3807 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
edwardjross/xlm-roberta-base-finetuned-panx-fr | 72064423f199b50e1091ef232e5730f51be090d8 | 2022-03-22T13:27:23.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | edwardjross | null | edwardjross/xlm-roberta-base-finetuned-panx-fr | 1 | null | transformers | 30,950 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8330262937531401
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2961
- F1: 0.8330
## Model description
More information needed
## Intended uses & limitations
More information needed
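A minimal named-entity-recognition sketch, assuming the checkpoint is available under `edwardjross/xlm-roberta-base-finetuned-panx-fr` (an assumption based on this repository's name):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="edwardjross/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

for entity in ner("Emmanuel Macron s'est rendu à Marseille avec Airbus."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```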
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5464 | 1.0 | 287 | 0.3304 | 0.7912 |
| 0.2617 | 2.0 | 574 | 0.2995 | 0.8142 |
| 0.1672 | 3.0 | 861 | 0.2961 | 0.8330 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
beston91/gpt2-xl_ft_logits_25k | f4529f2757d86e0c5a30e28cc3202af9e54ae8a8 | 2022-03-24T12:59:29.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | beston91 | null | beston91/gpt2-xl_ft_logits_25k | 1 | null | transformers | 30,951 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_logits_25k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_logits_25k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 136 | 6.2712 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.583023071289062 |
willcai/wav2vec2_common_voice_accents_us | 8ac0415daa08172c9a6b39bb97c26169746c454d | 2022-03-23T11:03:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | willcai | null | willcai/wav2vec2_common_voice_accents_us | 1 | null | transformers | 30,952 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_us
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_us
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.549 | 1.28 | 400 | 0.8521 |
| 0.4066 | 2.56 | 800 | 0.2407 |
| 0.2262 | 3.83 | 1200 | 0.2070 |
| 0.1828 | 5.11 | 1600 | 0.2134 |
| 0.1565 | 6.39 | 2000 | 0.2060 |
| 0.1448 | 7.67 | 2400 | 0.2100 |
| 0.1333 | 8.95 | 2800 | 0.2036 |
| 0.121 | 10.22 | 3200 | 0.2192 |
| 0.1146 | 11.5 | 3600 | 0.2154 |
| 0.1108 | 12.78 | 4000 | 0.2223 |
| 0.1017 | 14.06 | 4400 | 0.2331 |
| 0.094 | 15.34 | 4800 | 0.2257 |
| 0.0896 | 16.61 | 5200 | 0.2229 |
| 0.0825 | 17.89 | 5600 | 0.2229 |
| 0.0777 | 19.17 | 6000 | 0.2417 |
| 0.0719 | 20.45 | 6400 | 0.2433 |
| 0.0659 | 21.73 | 6800 | 0.2447 |
| 0.0651 | 23.0 | 7200 | 0.2446 |
| 0.0587 | 24.28 | 7600 | 0.2542 |
| 0.056 | 25.56 | 8000 | 0.2587 |
| 0.0521 | 26.84 | 8400 | 0.2640 |
| 0.0494 | 28.12 | 8800 | 0.2753 |
| 0.0465 | 29.39 | 9200 | 0.2722 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
willcai/wav2vec2_common_voice_accents_scotland | 4562fa4142d174f54259ead8c5e5d0422ec0f870 | 2022-03-23T11:15:11.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | willcai | null | willcai/wav2vec2_common_voice_accents_scotland | 1 | null | transformers | 30,953 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_scotland
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_scotland
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2752
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.7171 | 1.28 | 400 | 1.1618 |
| 0.4391 | 2.56 | 800 | 0.2422 |
| 0.2259 | 3.83 | 1200 | 0.2071 |
| 0.1813 | 5.11 | 1600 | 0.2126 |
| 0.1531 | 6.39 | 2000 | 0.2010 |
| 0.1383 | 7.67 | 2400 | 0.2004 |
| 0.13 | 8.95 | 2800 | 0.2069 |
| 0.1193 | 10.22 | 3200 | 0.2081 |
| 0.1124 | 11.5 | 3600 | 0.2051 |
| 0.1023 | 12.78 | 4000 | 0.2175 |
| 0.097 | 14.06 | 4400 | 0.2261 |
| 0.0863 | 15.34 | 4800 | 0.2301 |
| 0.0823 | 16.61 | 5200 | 0.2334 |
| 0.079 | 17.89 | 5600 | 0.2252 |
| 0.0743 | 19.17 | 6000 | 0.2393 |
| 0.0696 | 20.45 | 6400 | 0.2481 |
| 0.0644 | 21.73 | 6800 | 0.2416 |
| 0.064 | 23.0 | 7200 | 0.2449 |
| 0.0584 | 24.28 | 7600 | 0.2660 |
| 0.0544 | 25.56 | 8000 | 0.2630 |
| 0.0523 | 26.84 | 8400 | 0.2677 |
| 0.0494 | 28.12 | 8800 | 0.2730 |
| 0.0462 | 29.39 | 9200 | 0.2752 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
rahulkuruvilla/COVID-DistilBERTa | 817644da036ff4eb4a3835ba2bc0940ee68972b0 | 2022-03-22T21:28:47.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulkuruvilla | null | rahulkuruvilla/COVID-DistilBERTa | 1 | null | transformers | 30,954 | Entry not found |
202015004/MY_st1_training_shreya_fixed_23_march_unlabled_training | b0b9d13b69c63d62371849af2677e243becbbfa6 | 2022-03-23T01:31:44.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/MY_st1_training_shreya_fixed_23_march_unlabled_training | 1 | null | transformers | 30,955 | Entry not found |
PSW/ut_del_three_per_each_ver3 | b4e5c26302d1995fce7363113b122deab3bacd75 | 2022-03-23T06:21:48.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_three_per_each_ver3 | 1 | null | transformers | 30,956 | Entry not found |
PSW/ut_del_three_per_each_ver4 | ef0d112aa608365e06cb52abea05948d757039cd | 2022-03-23T07:52:06.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_three_per_each_ver4 | 1 | null | transformers | 30,957 | Entry not found |
PSW/ut_del_three_per_each_ver5 | 7d097ff72d7e02e576f41548637a1a32bbc1849e | 2022-03-23T09:10:40.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_three_per_each_ver5 | 1 | null | transformers | 30,958 | Entry not found |
tau/random_single_mask_1024_0.3_epoch1 | 06df59aa3afcc5f9b290d30fd68ce056191d8455 | 2022-03-23T12:17:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/random_single_mask_1024_0.3_epoch1 | 1 | null | transformers | 30,959 | Entry not found |
PSW/ut_del_n_per_each_ver1 | d370f60f2dee072004a8c838425695db69cc726b | 2022-03-23T14:31:20.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_n_per_each_ver1 | 1 | null | transformers | 30,960 | Entry not found |
202015004/My_st1_training_shreya_fixed_23_march_2 | 899d94c98aef2c1e438d1ad4dd49bbb8c04eb2f6 | 2022-03-23T19:09:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/My_st1_training_shreya_fixed_23_march_2 | 1 | null | transformers | 30,961 | Entry not found |
negfir/uncased_L-12_H-128_A-2 | 8abe4882cb1772793e9ae7442f406a7edea25de0 | 2022-03-23T19:18:33.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"transformers",
"generated_from_keras_callback",
"model-index"
] | null | false | negfir | null | negfir/uncased_L-12_H-128_A-2 | 1 | null | transformers | 30,962 | ---
tags:
- generated_from_keras_callback
model-index:
- name: uncased_L-12_H-128_A-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# uncased_L-12_H-128_A-2
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Bistolero/it_train_all | f38b05c2857e8c737043fdd35add654119ba5250 | 2022-03-23T20:28:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/it_train_all | 1 | null | transformers | 30,963 | Entry not found |
simonnedved/codet5-base | 9e3e2ebac8c470a696147edd46f7135052d226c6 | 2022-03-24T06:57:59.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"dis2py",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | simonnedved | null | simonnedved/codet5-base | 1 | null | transformers | 30,964 | ---
license: apache-2.0
tags:
- dis2py
- generated_from_trainer
model-index:
- name: codet5-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-base
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on the None dataset.
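
The card does not say how the fine-tuned checkpoint is meant to be invoked; the sketch below shows generic seq2seq inference with it. The input string and generation settings are placeholders, not taken from the card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "simonnedved/codet5-base"  # this card's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# placeholder input; the expected source format (see the dis2py tag) is not documented here
inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```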
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
modhp/wav2vec2-model1-torgo | 894d9f80426fc985d645cd85f65925689889166b | 2022-04-08T20:12:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | modhp | null | modhp/wav2vec2-model1-torgo | 1 | null | transformers | 30,965 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-model1-torgo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-model1-torgo
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
enimai/mt5-mustc-fr | 398a6eb77f41023ffca1e871dfb29696cd344750 | 2022-03-24T07:30:36.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | enimai | null | enimai/mt5-mustc-fr | 1 | null | transformers | 30,966 | ---
license: apache-2.0
---
|
202015004/MY_st1_training_shreya_fixed_24_march | 553055a074945ff29be017d852973869da93abbb | 2022-03-24T08:34:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/MY_st1_training_shreya_fixed_24_march | 1 | null | transformers | 30,967 | Entry not found |
zuppif/resnet-d-34 | 31c63b163f49af61f79db735fe4e0835444b44b4 | 2022-03-24T08:59:13.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnet-d-34 | 1 | null | transformers | 30,968 | Entry not found |
zuppif/resnet-d-101 | 579f1cceea82c576675d03790d10e9fd4ff79e1e | 2022-03-24T09:01:44.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnet-d-101 | 1 | null | transformers | 30,969 | Entry not found |
zuppif/resnet-d-152 | 17522072882014386894e6b699c893b93e40cd6f | 2022-03-24T09:03:30.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnet-d-152 | 1 | null | transformers | 30,970 | Entry not found |
Khalsuu/wav2vec2-large-xls-r-300m-turkish-colab | 878ae17b6a72800ec99bf3a5ca9814803ddcfef3 | 2022-03-24T14:00:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Khalsuu | null | Khalsuu/wav2vec2-large-xls-r-300m-turkish-colab | 1 | null | transformers | 30,971 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3631
- Wer: 0.3907
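
For a quick check of the checkpoint, the high-level ASR pipeline can be used; the sketch below is illustrative only, and `audio.wav` is a placeholder for a 16 kHz Turkish recording.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Khalsuu/wav2vec2-large-xls-r-300m-turkish-colab",  # this card's checkpoint
)
# "audio.wav" is a placeholder path to a Turkish speech clip
print(asr("audio.wav"))
```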
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2448 | 7.4 | 400 | 0.5564 | 0.5914 |
| 0.2245 | 14.81 | 800 | 0.3631 | 0.3907 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
PSW/ut_del_two_at_once_ver1_early_stopping | 8fe66aadcee1e8c307e83be412ac5b56422b6930 | 2022-03-24T11:54:58.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_two_at_once_ver1_early_stopping | 1 | null | transformers | 30,972 | Entry not found |
Helsinki-NLP/opus-mt-tc-big-zle-es | 06cbe533035fc3d2efbaf221974a1c07ff1bed78 | 2022-06-01T13:09:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"es",
"ru",
"rue",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zle-es | 1 | null | transformers | 30,973 | ---
language:
- be
- es
- ru
- rue
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zle-es
results:
- task:
name: Translation rus-spa
type: translation
args: rus-spa
dataset:
name: flores101-devtest
type: flores_101
args: rus spa devtest
metrics:
- name: BLEU
type: bleu
value: 22.5
- task:
name: Translation ukr-spa
type: translation
args: ukr-spa
dataset:
name: flores101-devtest
type: flores_101
args: ukr spa devtest
metrics:
- name: BLEU
type: bleu
value: 22.7
- task:
name: Translation bel-spa
type: translation
args: bel-spa
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bel-spa
metrics:
- name: BLEU
type: bleu
value: 46.3
- task:
name: Translation rus-spa
type: translation
args: rus-spa
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-spa
metrics:
- name: BLEU
type: bleu
value: 52.3
- task:
name: Translation ukr-spa
type: translation
args: ukr-spa
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-spa
metrics:
- name: BLEU
type: bleu
value: 51.6
- task:
name: Translation rus-spa
type: translation
args: rus-spa
dataset:
name: newstest2012
type: wmt-2012-news
args: rus-spa
metrics:
- name: BLEU
type: bleu
value: 29.0
- task:
name: Translation rus-spa
type: translation
args: rus-spa
dataset:
name: newstest2013
type: wmt-2013-news
args: rus-spa
metrics:
- name: BLEU
type: bleu
value: 31.7
---
# opus-mt-tc-big-zle-es
Neural machine translation model for translating from East Slavic languages (zle) to Spanish (es).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): bel rue rus ukr
* target language(s): spa
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-spa/opusTCv20210807_transformer-big_2022-03-23.zip)
* more information released models: [OPUS-MT zle-spa README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-spa/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Том був п'яничкою.",
"Он достаточно взрослый, чтобы путешествовать одному."
]
model_name = "pytorch-models/opus-mt-tc-big-zle-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Tom era un borracho.
# Es lo suficientemente mayor como para viajar solo.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-es")
print(pipe("Том був п'яничкою."))
# expected output: Tom era un borracho.
```
## Benchmarks
* test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-spa/opusTCv20210807_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-spa/opusTCv20210807_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bel-spa | tatoeba-test-v2021-08-07 | 0.65523 | 46.3 | 205 | 1412 |
| rus-spa | tatoeba-test-v2021-08-07 | 0.69933 | 52.3 | 10506 | 75246 |
| ukr-spa | tatoeba-test-v2021-08-07 | 0.68862 | 51.6 | 10115 | 59284 |
| bel-spa | flores101-devtest | 0.44744 | 14.1 | 1012 | 29199 |
| rus-spa | flores101-devtest | 0.50880 | 22.5 | 1012 | 29199 |
| ukr-spa | flores101-devtest | 0.50943 | 22.7 | 1012 | 29199 |
| rus-spa | newstest2012 | 0.55185 | 29.0 | 3003 | 79006 |
| rus-spa | newstest2013 | 0.56826 | 31.7 | 3000 | 70528 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 00:12:49 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-it-zle | 7a2d527e3ddfbc6ffc75f81712451b89d47407ea | 2022-06-01T13:08:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"it",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-it-zle | 1 | null | transformers | 30,974 | ---
language:
- be
- it
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-it-zle
results:
- task:
name: Translation ita-rus
type: translation
args: ita-rus
dataset:
name: flores101-devtest
type: flores_101
args: ita rus devtest
metrics:
- name: BLEU
type: bleu
value: 21.3
- task:
name: Translation ita-bel
type: translation
args: ita-bel
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ita-bel
metrics:
- name: BLEU
type: bleu
value: 33.3
- task:
name: Translation ita-rus
type: translation
args: ita-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ita-rus
metrics:
- name: BLEU
type: bleu
value: 46.7
- task:
name: Translation ita-ukr
type: translation
args: ita-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ita-ukr
metrics:
- name: BLEU
type: bleu
value: 48.4
---
# opus-mt-tc-big-it-zle
Neural machine translation model for translating from Italian (it) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): ita
* target language(s): bel rus ukr
* valid target language labels: >>bel<< >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-zle/opusTCv20210807_transformer-big_2022-03-23.zip)
* more information released models: [OPUS-MT ita-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>ukr<< Alcune cose non cambiano mai.",
">>rus<< Puoi sederti."
]
model_name = "pytorch-models/opus-mt-tc-big-it-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Деякі речі ніколи не змінюються.
# Можешь присесть.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-it-zle")
print(pipe(">>ukr<< Alcune cose non cambiano mai."))
# expected output: Деякі речі ніколи не змінюються.
```
## Benchmarks
* test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-zle/opusTCv20210807_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-zle/opusTCv20210807_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ita-bel | tatoeba-test-v2021-08-07 | 0.55727 | 33.3 | 264 | 1513 |
| ita-rus | tatoeba-test-v2021-08-07 | 0.66083 | 46.7 | 10045 | 65968 |
| ita-ukr | tatoeba-test-v2021-08-07 | 0.67674 | 48.4 | 5000 | 25353 |
| ita-rus | flores101-devtest | 0.50323 | 21.3 | 1012 | 23295 |
| ita-ukr | flores101-devtest | 0.47658 | 18.3 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 02:49:36 EET 2022
* port machine: LM0-400-22516.local
|
negfir/bert_uncased_L-10_H-768_A-12_new | 3e82b88a03fb81a284425b4bc68cdd75c88d7c1d | 2022-03-30T21:25:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-10_H-768_A-12_new | 1 | null | transformers | 30,975 | Entry not found |
VRT/mT5Small_mBartTokenizer_5epoch | d87957738d6bb0960e8e0891b486b2b42fa0906b | 2022-03-28T07:31:04.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | VRT | null | VRT/mT5Small_mBartTokenizer_5epoch | 1 | null | transformers | 30,976 | Entry not found |
MolePatrol/DialoGPT-Medium-ConnerBot | 10d22568446985471084f82be290822438035da3 | 2022-03-24T16:42:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MolePatrol | null | MolePatrol/DialoGPT-Medium-ConnerBot | 1 | null | transformers | 30,977 | ---
tags:
- conversational
---
# ConnerBot DialoGPT Model |
202015004/MY_st1_training_shreya_fixed_24_march_labled-decoded | ef388ceb76cf92cb39e9a35a0cfe7d4ef04c2d31 | 2022-03-24T20:19:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/MY_st1_training_shreya_fixed_24_march_labled-decoded | 1 | null | transformers | 30,978 | Entry not found |
pere/tt5-small | 80939cf7d1866f888dac1183861512daa43083dd | 2022-03-24T20:52:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | pere | null | pere/tt5-small | 1 | null | transformers | 30,979 | Entry not found |
IsaacSST/gpt2-xl-ft-value_it-1k-0_on_1k-1 | 2120f46475aad9f366399a7c01cb19275ea99551 | 2022-03-24T22:57:07.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | IsaacSST | null | IsaacSST/gpt2-xl-ft-value_it-1k-0_on_1k-1 | 1 | null | transformers | 30,980 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-value_it-1k-0_on_1k-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-value_it-1k-0_on_1k-1
This model is a fine-tuned version of [newtonkwan/gpt2-xl-ft-0](https://huggingface.co/newtonkwan/gpt2-xl-ft-0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8666
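
As a rough illustration of how the fine-tuned checkpoint can be queried, a minimal generation sketch follows; the prompt and decoding settings are placeholders, not part of the original card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "IsaacSST/gpt2-xl-ft-value_it-1k-0_on_1k-1"  # this card's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # gpt2-xl scale: several GB of weights

# placeholder prompt and sampling settings
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```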
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.96 | 3 | 1.9325 |
| No log | 1.96 | 6 | 1.9178 |
| No log | 2.96 | 9 | 1.8947 |
| No log | 3.96 | 12 | 1.8666 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.54938316345215 |
Tejas21/Totto_t5_base_pt_bleu_10k_steps | 82cba0cb2270d29dd702435adb95504a282c4a08 | 2022-04-21T18:36:30.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | Tejas21 | null | Tejas21/Totto_t5_base_pt_bleu_10k_steps | 1 | null | transformers | 30,981 | ---
license: apache-2.0
language:
- en
tags:
- Table to text
- Data to text
---
## Dataset:
- [ToTTo](https://github.com/google-research-datasets/ToTTo)
A Controlled Table-to-Text Dataset. ToTTo is an open-source table-to-text dataset with over 120,000 examples in English. It defines a controlled generation task: given a Wikipedia table and a set of highlighted cells, generate a one-sentence description.
## Base Model - T5-Base
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
T5 was built by the Google team to create a general-purpose model that can understand text. The basic idea behind T5 is to treat every text processing problem as a "text-to-text" problem, i.e. taking text as input and producing new text as output.
## Baseline Preprocessing
[Baseline Preprocessing](https://github.com/google-research/language/tree/master/language/totto)
This code repository supplements the main ToTTo repository and can be used to do basic preprocessing of the dataset.
## Fine-tuning
We fine-tuned the T5 conditional generation model for 10,000 steps on the ToTTo dataset, using BLEU as the evaluation metric.
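
A minimal inference sketch is given below. The linearised-table input follows the general ToTTo serialisation style, but the exact format used during fine-tuning is not documented in this card, so treat the example as an assumption.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "Tejas21/Totto_t5_base_pt_bleu_10k_steps"  # this card's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# toy highlighted-cell serialisation; the real preprocessing comes from the baseline scripts above
table = ("<page_title> Lionel Messi </page_title> "
         "<cell> 672 <col_header> Goals for Barcelona </col_header> </cell>")
inputs = tokenizer(table, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```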
|
MolePatrol/DialoGPT-Medium-MoleBot | fc64a52821b3350ddf443f7cf49f976acf819b36 | 2022-03-25T01:22:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MolePatrol | null | MolePatrol/DialoGPT-Medium-MoleBot | 1 | null | transformers | 30,982 | ---
tags:
- conversational
---
# My Awesome Model
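
A minimal single-turn chat sketch in the usual DialoGPT style is shown below; the user message is a placeholder and the decoding settings are illustrative.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MolePatrol/DialoGPT-Medium-MoleBot"  # this card's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# encode one user turn, terminated by the EOS token as DialoGPT expects
user_message = "Hello, how are you?"
input_ids = tokenizer.encode(user_message + tokenizer.eos_token, return_tensors="pt")

reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
reply = tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(reply)
```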
|
scasutt/wav2vec2-base_toy_train_data_augment_0.1 | df93bf5f18033f1633f269241189d91cd6dcaa6d | 2022-03-25T17:44:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-base_toy_train_data_augment_0.1 | 1 | null | transformers | 30,983 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_augment_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_augment_0.1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3786
- Wer: 0.9954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1342 | 1.05 | 250 | 3.3901 | 0.9954 |
| 3.0878 | 2.1 | 500 | 3.4886 | 0.9954 |
| 3.0755 | 3.15 | 750 | 3.4616 | 0.9954 |
| 3.0891 | 4.2 | 1000 | 3.5316 | 0.9954 |
| 3.0724 | 5.25 | 1250 | 3.2608 | 0.9954 |
| 3.0443 | 6.3 | 1500 | 3.3881 | 0.9954 |
| 3.0421 | 7.35 | 1750 | 3.4507 | 0.9954 |
| 3.0448 | 8.4 | 2000 | 3.4525 | 0.9954 |
| 3.0455 | 9.45 | 2250 | 3.3342 | 0.9954 |
| 3.0425 | 10.5 | 2500 | 3.3385 | 0.9954 |
| 3.0457 | 11.55 | 2750 | 3.4411 | 0.9954 |
| 3.0375 | 12.6 | 3000 | 3.4459 | 0.9954 |
| 3.0459 | 13.65 | 3250 | 3.3883 | 0.9954 |
| 3.0455 | 14.7 | 3500 | 3.3417 | 0.9954 |
| 3.0524 | 15.75 | 3750 | 3.3908 | 0.9954 |
| 3.0443 | 16.81 | 4000 | 3.3932 | 0.9954 |
| 3.0446 | 17.86 | 4250 | 3.4052 | 0.9954 |
| 3.0412 | 18.91 | 4500 | 3.3776 | 0.9954 |
| 3.0358 | 19.96 | 4750 | 3.3786 | 0.9954 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
PSW/ut_del_three_per_each_ver2_early_stop | 80dade84e61ffe9d70aefa438bca8e2348608426 | 2022-03-25T16:00:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_three_per_each_ver2_early_stop | 1 | null | transformers | 30,984 | Entry not found |
calebcsjm/reversed_harrypotter_generation | 2a6a17fd2d8f131997712fb58f413908a9ff4aa9 | 2022-03-26T05:02:52.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | calebcsjm | null | calebcsjm/reversed_harrypotter_generation | 1 | null | transformers | 30,985 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: reversed_harrypotter_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reversed_harrypotter_generation
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
peterhsu/bert-finetuned-squad-accelerate | 52183ebce0101c06c72cd6bfca8ece30bf1864b0 | 2022-03-26T19:34:28.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | peterhsu | null | peterhsu/bert-finetuned-squad-accelerate | 1 | null | transformers | 30,986 | Entry not found |
scasutt/wav2vec2-base_toy_train_data_masked_audio_10ms | 36f585454421b1edcd7a88a448f3f0eed4f7d246 | 2022-03-26T14:57:09.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-base_toy_train_data_masked_audio_10ms | 1 | null | transformers | 30,987 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_masked_audio_10ms
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_masked_audio_10ms
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2477
- Wer: 0.7145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1337 | 1.05 | 250 | 3.4081 | 0.9982 |
| 3.0792 | 2.1 | 500 | 3.2446 | 0.9982 |
| 2.0577 | 3.15 | 750 | 1.5839 | 0.9492 |
| 1.3639 | 4.2 | 1000 | 1.3279 | 0.8798 |
| 1.0814 | 5.25 | 1250 | 1.1629 | 0.8294 |
| 0.8722 | 6.3 | 1500 | 1.1305 | 0.8140 |
| 0.7602 | 7.35 | 1750 | 1.1241 | 0.7972 |
| 0.6982 | 8.4 | 2000 | 1.1429 | 0.7780 |
| 0.6494 | 9.45 | 2250 | 1.1047 | 0.7620 |
| 0.5924 | 10.5 | 2500 | 1.1756 | 0.7649 |
| 0.5385 | 11.55 | 2750 | 1.2230 | 0.7736 |
| 0.5026 | 12.6 | 3000 | 1.1783 | 0.7472 |
| 0.4973 | 13.65 | 3250 | 1.1613 | 0.7287 |
| 0.4726 | 14.7 | 3500 | 1.1923 | 0.7345 |
| 0.4521 | 15.75 | 3750 | 1.2153 | 0.7171 |
| 0.4552 | 16.81 | 4000 | 1.2485 | 0.7226 |
| 0.422 | 17.86 | 4250 | 1.2664 | 0.7240 |
| 0.3708 | 18.91 | 4500 | 1.2352 | 0.7148 |
| 0.3516 | 19.96 | 4750 | 1.2477 | 0.7145 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
202015004/MY_st1_training_shreya_fixed_26_march_labled-decoded | 56a404feb6da4a3888f150fb951c8413fc3d4f39 | 2022-03-27T00:48:28.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/MY_st1_training_shreya_fixed_26_march_labled-decoded | 1 | null | transformers | 30,988 | Entry not found |
scasutt/wav2vec2-base_toy_train_data_random_noise_0.1 | a2ce32c38437d797989fc7111c0e00f3a97f3139 | 2022-03-27T00:13:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-base_toy_train_data_random_noise_0.1 | 1 | null | transformers | 30,989 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_random_noise_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_random_noise_0.1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9263
- Wer: 0.7213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1296 | 2.1 | 250 | 3.5088 | 1.0 |
| 3.0728 | 4.2 | 500 | 3.1694 | 1.0 |
| 1.8686 | 6.3 | 750 | 1.3414 | 0.9321 |
| 1.1241 | 8.4 | 1000 | 1.0196 | 0.8321 |
| 0.8704 | 10.5 | 1250 | 0.9387 | 0.7962 |
| 0.6734 | 12.6 | 1500 | 0.9309 | 0.7640 |
| 0.5832 | 14.7 | 1750 | 0.9329 | 0.7346 |
| 0.5207 | 16.8 | 2000 | 0.9060 | 0.7247 |
| 0.4857 | 18.9 | 2250 | 0.9263 | 0.7213 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
TheDaydreamer/ricky | 31babcb06173cfe20af2c9f485758b2fe94e55a3 | 2022-03-26T22:37:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | TheDaydreamer | null | TheDaydreamer/ricky | 1 | null | transformers | 30,990 | ---
tags:
- conversational
---
# Rick |
ArkanDash/DialoGPT-small-emilia | 2d0c035189d69204f1d4aa03bc38ce6055308808 | 2022-03-30T07:54:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ArkanDash | null | ArkanDash/DialoGPT-small-emilia | 1 | null | transformers | 30,991 | ---
tags:
- conversational
---
# Emilia DialogGPT Model |
willcai/wav2vec2_common_voice_accents_indian_only_rerun | ccea57f2d2a915a6158b9e6142d7c19418aeea4d | 2022-03-27T18:00:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | willcai | null | willcai/wav2vec2_common_voice_accents_indian_only_rerun | 1 | null | transformers | 30,992 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_indian_only_rerun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_indian_only_rerun
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 588
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.6205 | 25.0 | 400 | 1.4584 |
| 0.3427 | 50.0 | 800 | 1.8377 |
| 0.1213 | 75.0 | 1200 | 1.6086 |
| 0.0643 | 100.0 | 1600 | 1.5136 |
| 0.0433 | 125.0 | 2000 | 1.4882 |
| 0.0323 | 150.0 | 2400 | 1.2204 |
| 0.0265 | 175.0 | 2800 | 1.3034 |
| 0.0206 | 200.0 | 3200 | 1.2866 |
| 0.0191 | 225.0 | 3600 | 1.2337 |
| 0.0148 | 250.0 | 4000 | 1.1729 |
| 0.0121 | 275.0 | 4400 | 1.2059 |
| 0.0105 | 300.0 | 4800 | 1.1246 |
| 0.01 | 325.0 | 5200 | 1.1397 |
| 0.0098 | 350.0 | 5600 | 1.1684 |
| 0.0073 | 375.0 | 6000 | 1.1030 |
| 0.0061 | 400.0 | 6400 | 1.2077 |
| 0.0049 | 425.0 | 6800 | 1.2653 |
| 0.0044 | 450.0 | 7200 | 1.1587 |
| 0.0037 | 475.0 | 7600 | 1.2283 |
| 0.0033 | 500.0 | 8000 | 1.1897 |
| 0.0026 | 525.0 | 8400 | 1.2633 |
| 0.0023 | 550.0 | 8800 | 1.2571 |
| 0.002 | 575.0 | 9200 | 1.2807 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Danik51002/Example | 3a510c2864b3e8cb413b2e50080e002f16b26952 | 2022-03-27T08:55:29.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | Danik51002 | null | Danik51002/Example | 1 | null | transformers | 30,993 | ---
tags:
- generated_from_trainer
model-index:
- name: Example
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Example
This model is a fine-tuned version of [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 42
- eval_batch_size: 42
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 840
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- num_epochs: 300
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data | adc9b9f540f8441d4845ca356580a9e8328af84e | 2022-03-27T11:32:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-large-xlsr-53_toy_train_data | 1 | null | transformers | 30,994 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6357
- Wer: 0.5496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6073 | 2.1 | 250 | 3.5111 | 1.0 |
| 3.0828 | 4.2 | 500 | 3.5133 | 1.0 |
| 1.9969 | 6.3 | 750 | 1.3924 | 0.9577 |
| 0.9279 | 8.4 | 1000 | 0.8378 | 0.7243 |
| 0.6692 | 10.5 | 1250 | 0.7367 | 0.6394 |
| 0.5273 | 12.6 | 1500 | 0.6703 | 0.5907 |
| 0.4314 | 14.7 | 1750 | 0.6594 | 0.5718 |
| 0.3809 | 16.8 | 2000 | 0.6138 | 0.5559 |
| 0.3934 | 18.9 | 2250 | 0.6357 | 0.5496 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
jorge-henao/gpt2-small-spanish-disco-poetry | ffc330e0430e0ce045620e98c0d442453f624945 | 2022-03-29T04:06:39.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | jorge-henao | null | jorge-henao/gpt2-small-spanish-disco-poetry | 1 | null | transformers | 30,995 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-small-spanish-disco-poetry
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-spanish-disco-poetry
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2471
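
A minimal text-generation sketch with this checkpoint follows; the Spanish prompt and sampling settings are placeholders, not part of the card.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jorge-henao/gpt2-small-spanish-disco-poetry",  # this card's checkpoint
)
# placeholder prompt and decoding settings
print(generator("La noche cae sobre la ciudad", max_length=60, do_sample=True, top_p=0.95))
```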
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.7329 | 1.0 | 750 | 4.4635 |
| 4.4445 | 2.0 | 1500 | 4.3703 |
| 4.3344 | 3.0 | 2250 | 4.3262 |
| 4.2352 | 4.0 | 3000 | 4.3045 |
| 4.1714 | 5.0 | 3750 | 4.2821 |
| 4.1034 | 6.0 | 4500 | 4.2619 |
| 4.0668 | 7.0 | 5250 | 4.2554 |
| 4.0322 | 8.0 | 6000 | 4.2515 |
| 4.0163 | 9.0 | 6750 | 4.2489 |
| 4.0011 | 10.0 | 7500 | 4.2471 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
BeamBee/DialoGPT-small-Lavenza | cfa983c8336e02a36425e10172f3c59adbad4141 | 2022-03-27T19:41:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BeamBee | null | BeamBee/DialoGPT-small-Lavenza | 1 | null | transformers | 30,996 | ---
tags:
- conversational
---
# Lavenza DialoGPT Model |
theResearchNinja/Cybonto-distilbert-base-uncased-finetuned-ner-v0.1 | 275e19282c811dbf51a9fd54a85770feb9582de7 | 2022-03-27T21:51:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:few_nerd",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | theResearchNinja | null | theResearchNinja/Cybonto-distilbert-base-uncased-finetuned-ner-v0.1 | 1 | null | transformers | 30,997 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- few_nerd
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Cybonto-distilbert-base-uncased-finetuned-ner-v0.1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: few_nerd
type: few_nerd
args: supervised
metrics:
- name: Precision
type: precision
value: 0.7377633209417596
- name: Recall
type: recall
value: 0.7817648386368765
- name: F1
type: f1
value: 0.7591269959856158
- name: Accuracy
type: accuracy
value: 0.9383331648547562
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Cybonto-distilbert-base-uncased-finetuned-ner-v0.1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the few_nerd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1930
- Precision: 0.7378
- Recall: 0.7818
- F1: 0.7591
- Accuracy: 0.9383
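
A minimal sketch with the token-classification pipeline is shown below; the example sentence is a placeholder, chosen only to exercise entity tagging.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="theResearchNinja/Cybonto-distilbert-base-uncased-finetuned-ner-v0.1",  # this card's checkpoint
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Barack Obama visited the Microsoft campus in Redmond last spring."))
```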
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 36
- eval_batch_size: 36
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2001 | 1.0 | 3661 | 0.1954 | 0.7244 | 0.7750 | 0.7488 | 0.9360 |
| 0.1717 | 2.0 | 7322 | 0.1898 | 0.7392 | 0.7767 | 0.7575 | 0.9384 |
| 0.1485 | 3.0 | 10983 | 0.1930 | 0.7378 | 0.7818 | 0.7591 | 0.9383 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Garsic/DialoGPT-medium-pecorine | 520b2778b99b0d5f5bf22a3abba10a5092d99d13 | 2022-03-27T22:17:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Garsic | null | Garsic/DialoGPT-medium-pecorine | 1 | null | transformers | 30,998 | ---
tags:
- conversational
---
# Pecorine dialog model |
BigSalmon/InformalToFormalLincoln31 | d9760a095371a0fe8e5c31a13ec6a92b2082cc53 | 2022-03-28T00:48:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln31 | 1 | null | transformers | 30,999 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln31")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln31")
```
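The block above only loads the checkpoint; continuing with the `tokenizer` and `model` objects it defines, a minimal generation call is sketched below — the prompt and decoding settings are illustrative, not prescribed by this card.
```
prompt = "informal english: i am very ready to do that just that.\nTranslated into the Style of Abraham Lincoln:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```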
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
``` |