modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-24 12:28:46) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 493 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-24 12:27:57) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
mekondjo/distilbert-base-uncased-finetuned-emotion | mekondjo | 2022-04-12T15:53:40Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-12T15:39:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9248167911304236
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2219
- Accuracy: 0.9245
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
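For reference, a minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments` (the model and dataset setup are not shown in this card, so only the arguments themselves are sketched here):
```python
from transformers import TrainingArguments

# Mirror the reported hyperparameters; anything not listed above keeps its default value
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```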
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.848 | 1.0 | 250 | 0.3157 | 0.9075 | 0.9059 |
| 0.253 | 2.0 | 500 | 0.2219 | 0.9245 | 0.9248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
surdan/LaBSE_ner_nerel | surdan | 2022-04-12T13:17:34Z | 1,192 | 10 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"ru",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-11T14:45:16Z | ---
language: ["ru", "en"]
tasks:
- token-classification
---
## About model
This model is based on [cointegrated/LaBSE-en-ru](https://huggingface.co/cointegrated/LaBSE-en-ru)
and was trained on the [surdan/nerel_short](https://huggingface.co/datasets/surdan/nerel_short) dataset.
You can find more info here:
- How the model was trained: [Train_model.ipynb](https://huggingface.co/surdan/LaBSE_ner_nerel/blob/main/Train_model.ipynb)
- Example usage of the model: [Inference.ipynb](https://huggingface.co/surdan/LaBSE_ner_nerel/blob/main/Inference.ipynb)
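A minimal usage sketch with the `transformers` pipeline (the aggregation strategy and the example sentence are assumptions, not taken from the notebooks above):
```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned LaBSE NER model
ner = pipeline("token-classification", model="surdan/LaBSE_ner_nerel", aggregation_strategy="simple")

print(ner("Илон Маск основал компанию SpaceX в 2002 году."))
```
|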
luckydog/distilbert-base-uncased-finetuned-emotion | luckydog | 2022-04-12T12:36:17Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-12T02:41:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9
- name: F1
type: f1
value: 0.8980758869010411
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3298
- Accuracy: 0.9
- F1: 0.8981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.2761 | 1.0 | 250 | 0.6036 | 0.814 | 0.7881 |
| 0.4081 | 2.0 | 500 | 0.3298 | 0.9 | 0.8981 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
TovaHasi/Toyota_material_calculator_reach_trucks | TovaHasi | 2022-04-12T12:33:49Z | 0 | 0 | null | [
"license:unlicense",
"region:us"
] | null | 2022-04-12T12:22:50Z | ---
app_file: app.py
license: unlicense
---
|
lewtun/roberta-large-finetuned-clinc-123 | lewtun | 2022-04-12T12:05:51Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-12T12:00:35Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc-123
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.925483870967742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-clinc-123
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7226
- Accuracy: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0576 | 1.0 | 120 | 5.0269 | 0.0068 |
| 4.5101 | 2.0 | 240 | 2.9324 | 0.7158 |
| 1.9757 | 3.0 | 360 | 0.7226 | 0.9255 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Chris1/real2sim | Chris1 | 2022-04-12T11:33:32Z | 0 | 0 | null | [
"pytorch",
"huggan",
"gan",
"license:mit",
"region:us"
] | null | 2022-04-12T11:33:27Z | ---
tags:
- huggan
- gan
# See a list of available tags here:
# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
# task: unconditional-image-generation or conditional-image-generation or image-to-image
license: mit
---
# MyModelName
## Model description
Describe the model here (what it does, what it's used for, etc.)
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
## Generated Images
You can embed local or remote images using ``
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
``` |
Chris1/ape2punk_epoch80 | Chris1 | 2022-04-12T11:21:48Z | 0 | 0 | null | [
"pytorch",
"huggan",
"gan",
"license:mit",
"region:us"
] | null | 2022-04-12T11:21:43Z | ---
tags:
- huggan
- gan
# See a list of available tags here:
# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
# task: unconditional-image-generation or conditional-image-generation or image-to-image
license: mit
---
# MyModelName
## Model description
Describe the model here (what it does, what it's used for, etc.)
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
## Generated Images
You can embed local or remote images using ``
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
``` |
Kuray107/ls-timit-wsj0-100percent-supervised-meta | Kuray107 | 2022-04-12T11:19:25Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-11T22:24:57Z | ---
tags:
- generated_from_trainer
model-index:
- name: ls-timit-wsj0-100percent-supervised-meta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ls-timit-wsj0-100percent-supervised-meta
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0531
- Wer: 0.0214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1618 | 4.57 | 1000 | 0.0500 | 0.0432 |
| 0.0489 | 9.13 | 2000 | 0.0535 | 0.0291 |
| 0.0306 | 13.7 | 3000 | 0.0478 | 0.0275 |
| 0.0231 | 18.26 | 4000 | 0.0531 | 0.0214 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
Eterna2/LayoutParser | Eterna2 | 2022-04-12T08:58:12Z | 0 | 2 | null | [
"detectron2",
"layout_parser",
"license:apache-2.0",
"region:us"
] | null | 2022-04-12T08:13:51Z | ---
license: apache-2.0
tags:
- detectron2
- layout_parser
---
Model binaries downloaded from https://github.com/Layout-Parser/layout-parser/blob/c0044a08da7a630e2241348e597a08ba6aa87ba1/src/layoutparser/models/detectron2/catalog.py |
adache/tf-distilbert-base-uncased-finetuned-emotion | adache | 2022-04-12T08:20:01Z | 12 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-12T08:19:50Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tf-distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
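For reference, a minimal sketch of how this optimizer configuration could be reconstructed in Keras (the model class, number of labels, and loss are assumptions, since the training script is not shown in this card):
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Rebuild the reported Adam configuration (decay=0.0 is the default)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-05, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False
)

# Hypothetical model setup; num_labels=6 assumes the emotion label set
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=6)
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```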
### Training results
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Tokenizers 0.11.6
|
nntadotzip/bert-base-cased-IUChatbot-ontologyDts-bertBaseCased-bertTokenizer-12April2022 | nntadotzip | 2022-04-12T08:14:00Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-12T07:53:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-IUChatbot-ontologyDts-bertBaseCased-bertTokenizer-12April2022
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-IUChatbot-ontologyDts-bertBaseCased-bertTokenizer-12April2022
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 357 | 0.4760 |
| 0.6305 | 2.0 | 714 | 0.3957 |
| 0.4345 | 3.0 | 1071 | 0.3856 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
adache/distilbert-base-uncased-finetuned-emotion | adache | 2022-04-12T07:48:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-12T05:43:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.9245
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8398 | 1.0 | 250 | 0.3276 | 0.9005 | 0.8966 |
| 0.2541 | 2.0 | 500 | 0.2270 | 0.9245 | 0.9249 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
ID56/FF-Vision-CIFAR | ID56 | 2022-04-12T07:06:43Z | 0 | 0 | null | [
"pytorch",
"image-classification",
"dataset:cifar10",
"license:cc-by-sa-4.0",
"region:us"
] | image-classification | 2022-04-06T22:02:53Z | ---
thumbnail: "https://huggingface.co/ID56/FF-Vision-CIFAR/resolve/main/assets/cover_image.png"
license: cc-by-sa-4.0
tags:
- image-classification
datasets:
- cifar10
metrics:
- accuracy
inference: false
---
# CIFAR-10 Upside Down Classifier
For the Fatima Fellowship 2022 Coding Challenge, DL for Vision track.
<a href="https://wandb.ai/dealer56/cifar-updown-classifier/reports/CIFAR-10-Upside-Down-Classifier-Fatima-Fellowship-2022-Coding-Challenge-Vision---VmlldzoxODA2MDE4" target="_parent"><img src="https://img.shields.io/badge/weights-%26biases-ffcf40" alt="W&B Report"/></a>
<img src="https://huggingface.co/ID56/FF-Vision-CIFAR/resolve/main/assets/cover_image.png" alt="Cover Image" width="800"/>
## Usage
### Model Definition
```python
from torch import nn
import timm
from huggingface_hub import PyTorchModelHubMixin
class UpDownEfficientNetB0(nn.Module, PyTorchModelHubMixin):
"""A simple Hub Mixin wrapper for timm EfficientNet-B0. Used to classify whether an image is upright or flipped down, on CIFAR-10."""
def __init__(self, **kwargs):
super().__init__()
self.base_model = timm.create_model('efficientnet_b0', num_classes=1, drop_rate=0.2, drop_path_rate=0.2)
self.config = kwargs.pop("config", None)
def forward(self, input):
return self.base_model(input)
```
### Loading the Model from Hub
```python
net = UpDownEfficientNetB0.from_pretrained("ID56/FF-Vision-CIFAR")
```
### Running Inference
```python
from torchvision import transforms
CIFAR_MEAN = (0.4914, 0.4822, 0.4465)
CIFAR_STD = (0.247, 0.243, 0.261)
transform = transforms.Compose([
    transforms.Resize((40, 40)),
transforms.ToTensor(),
transforms.Normalize(CIFAR_MEAN, CIFAR_STD)
])
image = load_some_image() # Load some PIL Image or uint8 HWC image array
image = transform(image) # Convert to CHW image tensor
image = image.unsqueeze(0) # Add batch dimension
net.eval()
pred = net(image)
``` |
tartuNLP/m2m100_418M_smugri | tartuNLP | 2022-04-12T06:38:16Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-04T11:24:56Z | ---
license: mit
language:
- en
widget:
- text: "Let us translate some text from Livonian to Võro!"
---
# NMT for Finno-Ugric Languages
This is an NMT system for translating between Võro, Livonian, North Sami, South Sami as well as Estonian, Finnish, Latvian and English. It was created by fine-tuning Facebook's m2m100-418M on the liv4ever and smugri datasets.
## Tokenizer
Four language codes were added to the tokenizer: __liv__, __vro__, __sma__ and __sme__. Currently the m2m100 tokenizer loads the list of languages from a hard-coded list, so it has to be updated after loading; see the code example below.
## Usage example
Install the transformers and sentencepiece libraries: `pip install sentencepiece transformers`
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("tartuNLP/m2m100_418M_smugri")
#Fix the language codes in the tokenizer
tokenizer.id_to_lang_token = dict(list(tokenizer.id_to_lang_token.items()) + list(tokenizer.added_tokens_decoder.items()))
tokenizer.lang_token_to_id = dict(list(tokenizer.lang_token_to_id.items()) + list(tokenizer.added_tokens_encoder.items()))
tokenizer.lang_code_to_token = { k.replace("_", ""): k for k in tokenizer.additional_special_tokens }
tokenizer.lang_code_to_id = { k.replace("_", ""): v for k, v in tokenizer.lang_token_to_id.items() }
model = AutoModelForSeq2SeqLM.from_pretrained("tartuNLP/m2m100_418M_smugri")
tokenizer.src_lang = 'liv'
encoded_src = tokenizer("Līvõ kēļ jelāb!", return_tensors="pt")
encoded_out = model.generate(**encoded_src, forced_bos_token_id = tokenizer.get_lang_id("sme"))
print(tokenizer.batch_decode(encoded_out, skip_special_tokens=True))
```
The output is `Livčča giella eallá.` |
gary109/wav2vec2-base-mirst500 | gary109 | 2022-04-12T05:52:24Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:mir_st500",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-04-11T06:13:13Z | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- mir_st500
metrics:
- accuracy
model-index:
- name: wav2vec2-base-mirst500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-mirst500
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the /workspace/datasets/datasets/MIR_ST500/MIR_ST500_AUDIO_CLASSIFICATION.py dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8678
- Accuracy: 0.7017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1999 | 1.0 | 1304 | 1.1029 | 0.5877 |
| 1.0779 | 2.0 | 2608 | 0.9455 | 0.6555 |
| 0.9775 | 3.0 | 3912 | 0.9670 | 0.6523 |
| 0.9542 | 4.0 | 5216 | 0.8810 | 0.6946 |
| 0.9403 | 5.0 | 6520 | 0.8678 | 0.7017 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1+cu102
- Datasets 2.0.0
- Tokenizers 0.10.3
|
kmasiak/FraudDetection | kmasiak | 2022-04-12T03:07:59Z | 0 | 0 | null | [
"region:us"
] | null | 2022-04-06T23:36:56Z | The files in this repository were used for detecting accounting fraud using VAE-GAN and other models. Here is a breakdown of the files:
20220409-21_35_52_ep_3_decoder_model.pth - Decoder I trained that has the best results.
20220409-21_35_52_ep_3_discriminator_model.pth - Discriminator I trained that has the best results.
20220409-21_35_52_ep_3_encoder_model.pth - Encoder I trained that has the best results.
Dataset.csv - The dataset used for train/testing, contains 9 features, 532909 regular, 70 global, and 30 local transactions.
Fraud_Detection_AutoML.ipynb - AutoSklearnClassifier (an implementation of automl) is used on the fraud detection dataset.
Fraud_Detection_Supervised.ipynb - KNN classifier is used on the fraud detection dataset.
Gradio_Demo.ipynb - Note this is just for demo purposes. The actual implementation of the VAE-GAN model is not used in the gradio demo due to time constraints.
SMOTE_VAE_GAN.ipynb - Use SMOTE to help mitigate the issue of an unbalanced dataset while training.
VAE_GAN_Test.ipynb - Evaluates a VAE-GAN model.
VAE_GAN_Train.ipynb - Trains the VAE-GAN model on the fraud detection dataset.
ep_100_decoder_model.pth - Pre-trained decoder from a previous paper I used to improve results.
ep_100_discriminator_model.pth - Pre-trained discriminator from a previous paper I used to improve results.
ep_100_encoder_model.pth - Pre-trained encoder from a previous paper I used to improve results.
Note: Credit for the above 3 files goes to Jie Dai, Chenjian Wang, and Shuoyi Wei, Accounting Fraud Detection with VAE-GAN, 2020. |
NoCaptain/DistilRoBERTa-C19-Vax-Fine-tuned | NoCaptain | 2022-04-12T00:34:14Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-10T00:51:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- accuracy
- f1
model-index:
- name: DistilRoberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilRoberta
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1246
- Precision: 0.9633
- Accuracy: 0.9697
- F1: 0.9705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:--------:|:------:|
| 0.5894 | 0.4 | 500 | 0.4710 | 0.8381 | 0.7747 | 0.7584 |
| 0.3863 | 0.8 | 1000 | 0.3000 | 0.8226 | 0.8737 | 0.8858 |
| 0.2272 | 1.2 | 1500 | 0.1973 | 0.9593 | 0.9333 | 0.9329 |
| 0.1639 | 1.6 | 2000 | 0.1694 | 0.9067 | 0.9367 | 0.9403 |
| 0.1263 | 2.0 | 2500 | 0.1128 | 0.9657 | 0.9597 | 0.9603 |
| 0.0753 | 2.4 | 3000 | 0.1305 | 0.9614 | 0.967 | 0.9679 |
| 0.0619 | 2.8 | 3500 | 0.1246 | 0.9633 | 0.9697 | 0.9705 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
modhp/wav2vec2-model2-torgo | modhp | 2022-04-11T23:31:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-08T19:47:36Z | ---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-model2-torgo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-model2-torgo
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9975
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 12.5453 | 0.76 | 500 | 14.6490 | 1.0 |
| 4.8036 | 1.53 | 1000 | 8.4523 | 1.0 |
| 5.0421 | 2.29 | 1500 | 5.4114 | 1.0 |
| 5.2055 | 3.05 | 2000 | 11.0507 | 1.0 |
| 4.6389 | 3.82 | 2500 | 4.6792 | 1.0 |
| 4.5523 | 4.58 | 3000 | 4.7855 | 1.0 |
| 4.7843 | 5.34 | 3500 | 11.2783 | 1.0 |
| 4.6066 | 6.11 | 4000 | 8.7807 | 1.0 |
| 4.7382 | 6.87 | 4500 | 2942.0220 | 1.0 |
| 130.5733 | 7.63 | 5000 | 5.8412 | 1.0 |
| 4.4972 | 8.4 | 5500 | 17.7038 | 1.0 |
| 4.5196 | 9.16 | 6000 | 11.4548 | 1.0 |
| 4.3198 | 9.92 | 6500 | 6.0885 | 1.0 |
| 4.4273 | 10.69 | 7000 | 6.7374 | 1.0 |
| 4.2783 | 11.45 | 7500 | 4.7276 | 1.0 |
| 4.2985 | 12.21 | 8000 | 6.1412 | 1.0 |
| 4.3262 | 12.98 | 8500 | 5.2621 | 1.0 |
| 4.1705 | 13.74 | 9000 | 5.2214 | 1.0 |
| 4.3176 | 14.5 | 9500 | 5.5359 | 1.0 |
| 3.9808 | 15.27 | 10000 | 4.1537 | 1.0 |
| 4.0228 | 16.03 | 10500 | 4.2962 | 1.0 |
| 4.0595 | 16.79 | 11000 | 7.6361 | 1.0 |
| 4.0088 | 17.56 | 11500 | 6.8715 | 1.0 |
| 3.8727 | 18.32 | 12000 | 8.8657 | 1.0 |
| 4.0073 | 19.08 | 12500 | 5.8170 | 1.0 |
| 3.8511 | 19.85 | 13000 | 13.9836 | 1.0 |
| 4.0899 | 20.61 | 13500 | 5.3287 | 1.0 |
| 3.8782 | 21.37 | 14000 | 8.0635 | 1.0 |
| 3.9235 | 22.14 | 14500 | 5.5129 | 1.0 |
| 3.7276 | 22.9 | 15000 | 5.0819 | 1.0 |
| 3.7908 | 23.66 | 15500 | 6.1458 | 1.0 |
| 3.9176 | 24.43 | 16000 | 4.6094 | 1.0 |
| 3.8477 | 25.19 | 16500 | 5.1406 | 1.0 |
| 3.6917 | 25.95 | 17000 | 4.5684 | 1.0 |
| 3.8568 | 26.72 | 17500 | 4.0306 | 1.0 |
| 3.7231 | 27.48 | 18000 | 5.6331 | 1.0 |
| 3.8145 | 28.24 | 18500 | 8.2997 | 1.0 |
| 3.7809 | 29.01 | 19000 | 5.7468 | 1.0 |
| 3.5995 | 29.77 | 19500 | 4.9975 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
huggingtweets/angrymemorys-oldandtoothless-sadboi666_-witheredstrings | huggingtweets | 2022-04-11T22:44:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-11T22:43:38Z | ---
language: en
thumbnail: http://www.huggingtweets.com/angrymemorys-oldandtoothless-sadboi666_-witheredstrings/1649717075201/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506323689456947207/xBvvxyQr_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511852580216967169/b1Aiv2t3_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/378800000610482331/8808c2f408b97fe3646f2dca86441506_400x400.jpeg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">makeouthill & VacuumF & Jason Hendricks & Angry Memories</div>
<div style="text-align: center; font-size: 14px;">@angrymemorys-oldandtoothless-sadboi666_-witheredstrings</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from makeouthill & VacuumF & Jason Hendricks & Angry Memories.
| Data | makeouthill | VacuumF | Jason Hendricks | Angry Memories |
| --- | --- | --- | --- | --- |
| Tweets downloaded | 321 | 425 | 3250 | 3199 |
| Retweets | 34 | 0 | 0 | 941 |
| Short tweets | 49 | 31 | 0 | 1110 |
| Tweets kept | 238 | 394 | 3250 | 1148 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2nh2rd94/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @angrymemorys-oldandtoothless-sadboi666_-witheredstrings's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/me7rzksi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/me7rzksi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/angrymemorys-oldandtoothless-sadboi666_-witheredstrings')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
adasnew/t5-small-xsum | adasnew | 2022-04-11T22:35:12Z | 18 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-11T18:45:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8641 | 0.04 | 500 | 2.6202 |
| 2.7466 | 0.08 | 1000 | 2.5660 |
| 2.8767 | 0.12 | 1500 | 2.5319 |
| 2.7099 | 0.16 | 2000 | 2.5107 |
| 2.7752 | 0.2 | 2500 | 2.4922 |
| 2.6037 | 0.24 | 3000 | 2.4800 |
| 2.8236 | 0.27 | 3500 | 2.4677 |
| 2.7089 | 0.31 | 4000 | 2.4581 |
| 2.7299 | 0.35 | 4500 | 2.4498 |
| 2.7498 | 0.39 | 5000 | 2.4420 |
| 2.6186 | 0.43 | 5500 | 2.4346 |
| 2.7817 | 0.47 | 6000 | 2.4288 |
| 2.5559 | 0.51 | 6500 | 2.4239 |
| 2.6725 | 0.55 | 7000 | 2.4186 |
| 2.6316 | 0.59 | 7500 | 2.4149 |
| 2.5561 | 0.63 | 8000 | 2.4115 |
| 2.5708 | 0.67 | 8500 | 2.4097 |
| 2.5861 | 0.71 | 9000 | 2.4052 |
| 2.6363 | 0.74 | 9500 | 2.4024 |
| 2.7435 | 0.78 | 10000 | 2.4003 |
| 2.7258 | 0.82 | 10500 | 2.3992 |
| 2.6113 | 0.86 | 11000 | 2.3983 |
| 2.6006 | 0.9 | 11500 | 2.3972 |
| 2.5684 | 0.94 | 12000 | 2.3960 |
| 2.6181 | 0.98 | 12500 | 2.3953 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
tonyalves/ft-pt-br-local-2 | tonyalves | 2022-04-11T20:57:03Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-11T20:46:13Z | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
model-index:
- name: ft-pt-br-local-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-pt-br-local-2
This model is a fine-tuned version of [tonyalves/output](https://huggingface.co/tonyalves/output) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
akoksal/bounti | akoksal | 2022-04-11T20:12:25Z | 304 | 6 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"sentiment",
"twitter",
"turkish",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-11T19:55:36Z | ---
language: "tr"
tags:
- sentiment
- twitter
- turkish
---
This Turkish Sentiment Analysis model is a fine-tuned checkpoint of pretrained [BERTurk model 128k uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) with [BounTi dataset](https://ieeexplore.ieee.org/document/9477814).
## Usage in Hugging Face Pipeline
```python
from transformers import pipeline
bounti = pipeline("sentiment-analysis",model="akoksal/bounti")
print(bounti("Bu yemeği pek sevmedim"))
>> [{'label': 'negative', 'score': 0.8012508153915405}]
```
## Results
The scores of the finetuned model with BERTurk:
||Accuracy|Precision|Recall|F1|
|-------------|:---------:|:---------:|:------:|:-----:|
|Validation|0.745|0.706|0.730|0.715|
|Test|0.723|0.692|0.729|0.701|
## Dataset
You can find the dataset in [our Github repo](https://github.com/boun-tabi/BounTi-Turkish-Sentiment-Analysis) with the training, validation, and test splits.
Due to Twitter copyright, we cannot release the full text of the tweets. We share the tweet IDs, and the full text can be downloaded through official Twitter API.
| | Training | Validation | Test |
|----------|:--------:|:----------:|:----:|
| Positive | 1691 | 188 | 469 |
| Neutral | 3034 | 338 | 843 |
| Negative | 1008 | 113 | 280 |
| Total | 5733 | 639 | 1592 |
## Citation
You can cite the following paper if you use our work:
```
@INPROCEEDINGS{BounTi,
author={Köksal, Abdullatif and Özgür, Arzucan},
booktitle={2021 29th Signal Processing and Communications Applications Conference (SIU)},
title={Twitter Dataset and Evaluation of Transformers for Turkish Sentiment Analysis},
year={2021},
volume={},
number={}
}
```
---
|
Kuray107/ls-timit-100percent-supervised-meta | Kuray107 | 2022-04-11T19:44:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-11T14:57:43Z | ---
tags:
- generated_from_trainer
model-index:
- name: ls-timit-100percent-supervised-meta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ls-timit-100percent-supervised-meta
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0649
- Wer: 0.0253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0964 | 7.04 | 1000 | 0.0706 | 0.0342 |
| 0.0445 | 14.08 | 2000 | 0.0649 | 0.0253 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
nateraw/fastai-dummy-learner | nateraw | 2022-04-11T19:23:08Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2022-04-11T19:15:53Z | ---
tags:
- fastai
---
# Amazing!
Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (template below and [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using the 🤗Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join our fastai community on the Hugging Face Discord!
Greetings fellow fastlearner 🤝!
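As a starting point for steps 1 and 2, a minimal sketch of pulling this learner back from the Hub with `huggingface_hub` (it assumes the repo was pushed with `push_to_hub_fastai`; the input item is a hypothetical placeholder):
```python
from huggingface_hub import from_pretrained_fastai

# Download the learner from this repository
learner = from_pretrained_fastai("nateraw/fastai-dummy-learner")

# Run a prediction on an item matching the learner's DataLoaders (hypothetical placeholder)
prediction = learner.predict(some_input_item)
print(prediction)
```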
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
jegormeister/robbert-v2-dutch-base-mqa-finetuned | jegormeister | 2022-04-11T19:09:29Z | 1,058 | 4 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"robbert",
"nl",
"dataset:clips/mqa",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-04-11T13:40:02Z | ---
language: nl
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- robbert
datasets:
- clips/mqa
---
# jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base). It was fine-tuned on 1,000,000 rows of Dutch FAQ question-answer pairs from [clips/mqa](https://huggingface.co/datasets/clips/mqa).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned')
model = AutoModel.from_pretrained('jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 12500 with parameters:
```
{'batch_size': 80, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
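Putting the pieces above together, a minimal sketch of the corresponding `fit()` call (the construction of `train_examples` from clips/mqa is an assumption and is not shown in this card):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Base model; sentence-transformers adds mean pooling automatically
model = SentenceTransformer("pdelobelle/robbert-v2-dutch-base")

# Hypothetical question-answer pairs extracted from clips/mqa
train_examples = [InputExample(texts=[question, answer]) for question, answer in qa_pairs]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=80)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    scheduler="WarmupLinear",
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```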
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
TrabajoAprendizajeProfundo/Trabajo | TrabajoAprendizajeProfundo | 2022-04-11T17:27:15Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] | reinforcement-learning | 2022-04-09T11:48:09Z | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
---
# TODO: Fill this model card
This is a pre-trained model of an agent playing Asteroids-v0 using the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library.
### Usage (with Stable-baselines3)
Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed:
```
pip install stable-baselines3
pip install huggingface_sb3
```
Then, you can use the model like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="TrabajoAprendizajeProfundo/Trabajo", filename="Asteroids-v0.zip")
model = PPO.load(checkpoint)
# Evaluate the agent
eval_env = gym.make('Asteroids-v0')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play
# NOTE: `Recorder` is assumed to come from the colabgymrender package (`pip install colabgymrender`);
# the original snippet did not show where it is imported from.
from colabgymrender.recorder import Recorder

directory = './video'
env = Recorder(eval_env, directory)
obs = env.reset()
done = False
while not done:
    action, _state = model.predict(obs)
    obs, reward, done, info = env.step(action)
env.play()
```
### Evaluation Results
```python
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
|
tbosse/bert-base-german-cased-finetuned-subj_v5_11Epoch | tbosse | 2022-04-11T17:08:55Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-11T15:51:15Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v5_11Epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v5_11Epoch
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3467
- Precision: 0.8240
- Recall: 0.8287
- F1: 0.8263
- Accuracy: 0.9198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 32 | 0.3485 | 0.6992 | 0.7051 | 0.7021 | 0.8639 |
| No log | 2.0 | 64 | 0.2679 | 0.7947 | 0.7612 | 0.7776 | 0.8994 |
| No log | 3.0 | 96 | 0.2555 | 0.8073 | 0.8118 | 0.8095 | 0.9112 |
| No log | 4.0 | 128 | 0.2591 | 0.8290 | 0.8034 | 0.8160 | 0.9132 |
| No log | 5.0 | 160 | 0.2808 | 0.8450 | 0.8118 | 0.8281 | 0.9158 |
| No log | 6.0 | 192 | 0.2953 | 0.8386 | 0.8174 | 0.8279 | 0.9172 |
| No log | 7.0 | 224 | 0.3164 | 0.8347 | 0.8371 | 0.8359 | 0.9204 |
| No log | 8.0 | 256 | 0.3267 | 0.8329 | 0.8258 | 0.8293 | 0.9178 |
| No log | 9.0 | 288 | 0.3373 | 0.8268 | 0.8315 | 0.8291 | 0.9198 |
| No log | 10.0 | 320 | 0.3450 | 0.8324 | 0.8230 | 0.8277 | 0.9211 |
| No log | 11.0 | 352 | 0.3467 | 0.8240 | 0.8287 | 0.8263 | 0.9198 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Giyaseddin/distilbert-base-uncased-finetuned-short-answer-assessment | Giyaseddin | 2022-04-11T15:17:08Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-11T09:03:04Z | ---
license: apache-2.0
language: en
library: transformers
other: distilbert
datasets:
- Short Question Answer Assessment Dataset
---
# DistilBERT base uncased model for Short Question Answer Assessment
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model.
This is a classification model that solves the Short Question Answer Assessment task, obtained by fine-tuning the [pretrained DistilBERT model](https://huggingface.co/distilbert-base-uncased) on the
[Question Answer Assessment dataset](#).
## Intended uses & limitations
This model can only be used for the kinds of questions and answers that are similar to the ones in the dataset of [Banjade et al.](https://aclanthology.org/W16-0520.pdf).
### How to use
You can use this model directly with a text-classification pipeline:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model="Giyaseddin/distilbert-base-uncased-finetuned-short-answer-assessment", return_all_scores=True)
>>> context = "To rescue a child who has fallen down a well, rescue workers fasten him to a rope, the other end of which is then reeled in by a machine. The rope pulls the child straight upward at steady speed."
>>> question = "How does the amount of tension in the rope compare to the downward force of gravity acting on the child?"
>>> ref_answer = "Since the child is being raised straight upward at a constant speed, the net force on the child is zero and all the forces balance. That means that the tension in the rope balances the downward force of gravity."
>>> student_answer = "The tension force is higher than the force of gravity."
>>>
>>> body = " [SEP] ".join([context, question, ref_answer, student_answer])
>>> raw_results = classifier([body])
>>> raw_results
[[{'label': 'LABEL_0', 'score': 0.0004029414849355817},
{'label': 'LABEL_1', 'score': 0.0005476847873069346},
{'label': 'LABEL_2', 'score': 0.998059093952179},
{'label': 'LABEL_3', 'score': 0.0009902542224153876}]]
>>> _LABELS_ID2NAME = {0: "correct", 1: "correct_but_incomplete", 2: "contradictory", 3: "incorrect"}
>>> results = []
>>> for result in raw_results:
for score in result:
results.append([
{_LABELS_ID2NAME[int(score["label"][-1:])]: "%.2f" % score["score"]}
])
>>> results
[[{'correct': '0.00'}],
[{'correct_but_incomplete': '0.00'}],
[{'contradictory': '1.00'}],
[{'incorrect': '0.00'}]]
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
This bias will also affect all fine-tuned versions of this model.
Another limitation of this model is input length: longer sequences can lead to wrong predictions because of the pre-processing phase (after concatenating the input sequences, the important student answer might be pruned!).
## Pre-training data
DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Fine-tuning data
The annotated dataset consists of 900 students’ short constructed answers and their correctness in the given context. Four qualitative levels of correctness are defined: correct, correct-but-incomplete, contradictory, and incorrect.
## Training procedure
### Preprocessing
In the preprocessing phase, the following parts are concatenated using the separator `[SEP]`: _question context_, _question_, _reference_answer_, and _student_answer_.
This makes the full input text:
```
[CLS] Context Sentence [SEP] Question Sentence [SEP] Reference Answer Sentence [SEP] Student Answer Sentence [CLS]
```
The data are split according to the following ratio:
- Training set 80%.
- Test set 20%.
Labels are mapped as: `{0: "correct", 1: "correct_but_incomplete", 2: "contradictory", 3: "incorrect"}`
### Fine-tuning
The model was fine-tuned on a GeForce GTX 960M for 20 minutes. The parameters are:
| Parameter | Value |
|:-------------------:|:-----:|
| Learning rate | 5e-5 |
| Weight decay | 0.01 |
| Training batch size | 8 |
| Epochs | 4 |
Here are the scores during training:
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|:----------:|:-------------:|:-----------------:|:----------:|:---------:|:----------:|:--------:|
| 1 | No log | 0.665765 | 0.755330 | 0.743574 | 0.781210 | 0.755330 |
| 2 | 0.932100 | 0.362124 | 0.890355 | 0.889875 | 0.891407 | 0.890355 |
| 3 | 0.364900 | 0.226225 | 0.942132 | 0.941802 | 0.942458 | 0.942132 |
| 4 | 0.176900 | 0.193660 | 0.954315 | 0.954175 | 0.954985 | 0.954315 |
## Evaluation results
When fine-tuned on the downstream task of Question Answer Assessment (4-class classification), this model achieved the following results:
(scores are rounded to three decimal places)
| | precision | recall | f1-score | support |
|:------------------------:|:----------:|:-------:|:--------:|:-------:|
| _correct_ | 0.938 | 0.989 | 0.963 | 366 |
| _correct_but_incomplete_ | 0.975 | 0.922 | 0.948 | 257 |
| _contradictory_ | 0.946 | 0.938 | 0.942 | 113 |
| _incorrect_ | 0.963 | 0.944 | 0.953 | 249 |
| accuracy | - | - | 0.954 | 985 |
| macro avg | 0.956 | 0.948 | 0.952 | 985 |
| weighted avg | 0.955 | 0.954 | 0.954 | 985 |
Confusion matrix:
| Actual \ Predicted | _correct_ | _correct_but_incomplete_ | _contradictory_ | _incorrect_ |
|:------------------------:|:---------:|:------------------------:|:---------------:|:-----------:|
| _correct_ | 362 | 4 | 0 | 0 |
| _correct_but_incomplete_ | 13 | 237 | 0 | 7 |
| _contradictory_ | 4 | 1 | 106 | 2 |
| _incorrect_ | 7 | 1 | 6 | 235 |
The AUC scores are: micro-averaged **0.9695** and macro-averaged **0.9659**.
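For reference, micro- and macro-averaged AUC values like the ones above can be computed from the predicted class probabilities; the snippet below is a sketch with made-up numbers, not the original evaluation code:
```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

# Illustrative placeholders: true labels (0-3) and predicted class probabilities.
y_true = np.array([0, 2, 3, 1, 0])
y_prob = np.array([
    [0.90, 0.05, 0.03, 0.02],
    [0.10, 0.05, 0.80, 0.05],
    [0.05, 0.05, 0.10, 0.80],
    [0.20, 0.60, 0.10, 0.10],
    [0.70, 0.10, 0.10, 0.10],
])

y_true_bin = label_binarize(y_true, classes=[0, 1, 2, 3])
print("micro AUC:", roc_auc_score(y_true_bin, y_prob, average="micro"))
print("macro AUC:", roc_auc_score(y_true_bin, y_prob, average="macro"))
```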
|
optimum/neuron-MiniLMv2-L12-H384-distilled-finetuned-clinc | optimum | 2022-04-11T13:44:33Z | 5 | 1 | transformers | [
"transformers",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-11T13:38:35Z | ---
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Neuron conversion
# MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-distilled-clinc
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9389999
## Deploy/use Model
If you want to use this model, check out the following notebook: [sagemaker/18_inferentia_inference](https://github.com/huggingface/notebooks/blob/main/sagemaker/18_inferentia_inference/sagemaker-notebook.ipynb)
```python
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=s3_model_uri, # path to your model and script
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.12", # transformers version used
pytorch_version="1.9", # pytorch version used
py_version='py37', # python version used
)
# Let SageMaker know that we've already compiled the model via neuron-cc
huggingface_model._is_compiled_model = True
# deploy the endpoint
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type="ml.inf1.xlarge" # AWS Inferentia Instance
)
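# (Illustrative addition, not part of the original card.) Once the endpoint is up,
# it can be queried with a JSON payload; the example utterance below is made up.
result = predictor.predict({"inputs": "I would like to book a flight from Boston to Paris."})
print(result)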
``` |
aleksavega/t5-efficient-base-finetuned-1.2 | aleksavega | 2022-04-11T12:04:08Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-04-11T09:53:00Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-efficient-base-finetuned-1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-efficient-base-finetuned-1.2
This model is a fine-tuned version of [google/t5-efficient-base](https://huggingface.co/google/t5-efficient-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5294
- Rouge1: 62.691
- Rouge2: 55.9731
- Rougel: 60.9097
- Rougelsum: 61.4393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4662
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.2424 | 1.0 | 1217 | 1.7042 | 34.2215 | 24.2754 | 31.7289 | 32.4237 |
| 1.7716 | 2.0 | 2434 | 1.6184 | 43.4774 | 34.0476 | 41.3691 | 41.9132 |
| 1.6324 | 3.0 | 3651 | 1.5811 | 49.1441 | 40.7935 | 47.0077 | 47.6388 |
| 1.5226 | 4.0 | 4868 | 1.5243 | 54.4769 | 46.3387 | 52.3289 | 52.9555 |
| 1.4121 | 5.0 | 6085 | 1.5040 | 56.8792 | 49.1963 | 54.7327 | 55.2805 |
| 1.331 | 6.0 | 7302 | 1.4930 | 58.6896 | 51.1683 | 56.7096 | 57.3605 |
| 1.2677 | 7.0 | 8519 | 1.4785 | 59.9285 | 52.4631 | 57.8575 | 58.4203 |
| 1.2175 | 8.0 | 9736 | 1.4839 | 60.0299 | 52.8806 | 58.0099 | 58.6348 |
| 1.1782 | 9.0 | 10953 | 1.4908 | 61.247 | 54.0887 | 59.2175 | 59.7658 |
| 1.1442 | 10.0 | 12170 | 1.4882 | 61.9895 | 54.9455 | 60.0728 | 60.5786 |
| 1.1118 | 11.0 | 13387 | 1.5061 | 62.1077 | 55.1276 | 60.2218 | 60.7475 |
| 1.081 | 12.0 | 14604 | 1.5078 | 61.6083 | 54.6805 | 59.7912 | 60.2489 |
| 1.0668 | 13.0 | 15821 | 1.5200 | 62.3075 | 55.5201 | 60.5192 | 60.9557 |
| 1.0488 | 14.0 | 17038 | 1.5344 | 62.5144 | 55.6332 | 60.6845 | 61.1715 |
| 1.0324 | 15.0 | 18255 | 1.5313 | 62.7697 | 56.0313 | 60.9298 | 61.4739 |
| 1.0302 | 16.0 | 19472 | 1.5294 | 62.691 | 55.9731 | 60.9097 | 61.4393 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
optimum/MiniLMv2-L12-H384-distilled-finetuned-clinc | optimum | 2022-04-11T11:21:21Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-11T11:18:49Z | ---
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-distilled-clinc
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3479
- Accuracy: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 60 | 0.8171 | 0.2490 |
| No log | 2.0 | 120 | 0.7039 | 0.6568 |
| No log | 3.0 | 180 | 0.6067 | 0.7932 |
| 0.7269 | 4.0 | 240 | 0.5270 | 0.8674 |
| 0.7269 | 5.0 | 300 | 0.4659 | 0.9010 |
| 0.7269 | 6.0 | 360 | 0.4201 | 0.9194 |
| 0.7269 | 7.0 | 420 | 0.3867 | 0.9352 |
| 0.4426 | 8.0 | 480 | 0.3649 | 0.9352 |
| 0.4426 | 9.0 | 540 | 0.3520 | 0.9403 |
| 0.4426 | 10.0 | 600 | 0.3479 | 0.94 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
vocab-transformers/distilbert-tokenizer_256k-MLM_best | vocab-transformers | 2022-04-11T11:16:06Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-04-11T11:14:12Z | # DistilBERT with 256k token embeddings
This model was initialized with a word2vec token embedding matrix with 256k entries, but these token embeddings were updated during MLM. The word2vec was trained on 100GB data from C4, MSMARCO, News, Wikipedia, S2ORC, for 3 epochs.
Then the model was trained on this dataset with MLM for 1.55M steps (batch size 64). The token embeddings were updated during MLM.
|
vocab-transformers/distilbert-word2vec_256k-MLM_best | vocab-transformers | 2022-04-11T11:13:13Z | 27 | 4 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-04-11T11:10:05Z | # DistilBERT with word2vec token embeddings
This model has a word2vec token embedding matrix with 256k entries. The word2vec was trained on 100GB data from C4, MSMARCO, News, Wikipedia, S2ORC, for 3 epochs.
Then the model was trained on this dataset with MLM for 1.37M steps (batch size 64). The token embeddings were NOT updated.
For the initial word2vec weights with Gensim see: [https://huggingface.co/vocab-transformers/distilbert-word2vec_256k-MLM_1M/tree/main/word2vec](https://huggingface.co/vocab-transformers/distilbert-word2vec_256k-MLM_1M/tree/main/word2vec)
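A minimal usage sketch (not part of the original card); the example sentence is illustrative:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="vocab-transformers/distilbert-word2vec_256k-MLM_best")
masked = f"The capital of France is {fill_mask.tokenizer.mask_token}."
print(fill_mask(masked))  # top predictions for the masked token
```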
|
Chikashi/t5-small-finetuned-wikihow_3epoch_b8_lr3e-3 | Chikashi | 2022-04-11T08:17:07Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikihow",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-10T23:51:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-wikihow_3epoch_b8_lr3e-3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 27.1711
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikihow_3epoch_b8_lr3e-3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3163
- Rouge1: 27.1711
- Rouge2: 10.6296
- Rougel: 23.206
- Rougelsum: 26.4801
- Gen Len: 18.5433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.0734 | 0.25 | 5000 | 2.7884 | 22.4825 | 7.2492 | 19.243 | 21.9167 | 18.0616 |
| 2.9201 | 0.51 | 10000 | 2.7089 | 24.0869 | 8.0348 | 20.4814 | 23.4541 | 18.5994 |
| 2.8403 | 0.76 | 15000 | 2.6390 | 24.62 | 8.3776 | 20.8736 | 23.9784 | 18.4676 |
| 2.7764 | 1.02 | 20000 | 2.5943 | 24.1504 | 8.3933 | 20.8271 | 23.5382 | 18.4078 |
| 2.6641 | 1.27 | 25000 | 2.5428 | 25.6574 | 9.2371 | 21.8576 | 24.9558 | 18.4249 |
| 2.6369 | 1.53 | 30000 | 2.5042 | 25.5208 | 9.254 | 21.6673 | 24.8589 | 18.6467 |
| 2.6 | 1.78 | 35000 | 2.4637 | 26.094 | 9.7003 | 22.3097 | 25.4695 | 18.5065 |
| 2.5562 | 2.03 | 40000 | 2.4285 | 26.5374 | 9.9222 | 22.5291 | 25.8836 | 18.5553 |
| 2.4322 | 2.29 | 45000 | 2.3858 | 26.939 | 10.3555 | 23.0211 | 26.2834 | 18.5614 |
| 2.4106 | 2.54 | 50000 | 2.3537 | 26.7423 | 10.2816 | 22.7986 | 26.083 | 18.5792 |
| 2.3731 | 2.8 | 55000 | 2.3163 | 27.1711 | 10.6296 | 23.206 | 26.4801 | 18.5433 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggan/pix2pix-facades-demo | huggan | 2022-04-11T08:09:26Z | 0 | 0 | null | [
"pytorch",
"huggan",
"gan",
"license:mit",
"region:us"
] | null | 2022-04-09T13:16:10Z | ---
tags:
- huggan
- gan
# See a list of available tags here:
# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
# task: unconditional-image-generation or conditional-image-generation or image-to-image
license: mit
---
This was run from this implementation: https://github.com/NielsRogge/community-events-1/blob/improve_pix2pix/huggan/pytorch/pix2pix/train.py
The command to run was:
```bash
accelerate launch train.py --checkpoint_interval 1 --push_to_hub --output_dir pix2pix-facades --hub_model_id huggan/pix2pix-facades-demo --wandb
``` |
Mandela/DialoGPT-small-DEADPOOL | Mandela | 2022-04-10T23:43:06Z | 0 | 0 | null | [
"conversation",
"region:us"
] | null | 2022-04-10T16:49:29Z | ---
language:
- python
tags:
- conversation
--- |
Chikashi/t5-small-finetuned-wikihow_3epoch_b4_lr3e-5 | Chikashi | 2022-04-10T23:42:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikihow",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-09T19:16:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-wikihow_3epoch_b4_lr3e-5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 26.1071
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikihow_3epoch_b4_lr3e-5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4351
- Rouge1: 26.1071
- Rouge2: 9.3627
- Rougel: 22.0825
- Rougelsum: 25.4514
- Gen Len: 18.474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.9216 | 0.13 | 5000 | 2.6385 | 23.8039 | 7.8863 | 20.0109 | 23.0802 | 18.3481 |
| 2.8158 | 0.25 | 10000 | 2.5884 | 24.2567 | 8.2003 | 20.438 | 23.5325 | 18.3833 |
| 2.7743 | 0.38 | 15000 | 2.5623 | 24.8471 | 8.3768 | 20.8711 | 24.1114 | 18.2901 |
| 2.7598 | 0.51 | 20000 | 2.5368 | 25.1566 | 8.6721 | 21.1896 | 24.4558 | 18.3561 |
| 2.7192 | 0.64 | 25000 | 2.5220 | 25.3477 | 8.8106 | 21.3799 | 24.6742 | 18.3108 |
| 2.7207 | 0.76 | 30000 | 2.5114 | 25.5912 | 8.998 | 21.5508 | 24.9344 | 18.3445 |
| 2.7041 | 0.89 | 35000 | 2.4993 | 25.457 | 8.8644 | 21.4516 | 24.7965 | 18.4354 |
| 2.687 | 1.02 | 40000 | 2.4879 | 25.5886 | 8.9766 | 21.6794 | 24.9512 | 18.4035 |
| 2.6652 | 1.14 | 45000 | 2.4848 | 25.7367 | 9.078 | 21.7096 | 25.0924 | 18.4328 |
| 2.6536 | 1.27 | 50000 | 2.4761 | 25.7368 | 9.1609 | 21.729 | 25.0866 | 18.3117 |
| 2.6589 | 1.4 | 55000 | 2.4702 | 25.7738 | 9.1413 | 21.7492 | 25.114 | 18.4862 |
| 2.6384 | 1.53 | 60000 | 2.4620 | 25.7433 | 9.1356 | 21.8198 | 25.0896 | 18.489 |
| 2.6337 | 1.65 | 65000 | 2.4595 | 26.0919 | 9.2605 | 21.9447 | 25.4065 | 18.4083 |
| 2.6375 | 1.78 | 70000 | 2.4557 | 26.0912 | 9.3469 | 22.0182 | 25.4428 | 18.4133 |
| 2.6441 | 1.91 | 75000 | 2.4502 | 26.1366 | 9.3143 | 22.058 | 25.4673 | 18.4972 |
| 2.6276 | 2.03 | 80000 | 2.4478 | 25.9929 | 9.2464 | 21.9271 | 25.3263 | 18.469 |
| 2.6062 | 2.16 | 85000 | 2.4467 | 26.0465 | 9.3166 | 22.0342 | 25.3998 | 18.3777 |
| 2.6126 | 2.29 | 90000 | 2.4407 | 26.1953 | 9.3848 | 22.1148 | 25.5161 | 18.467 |
| 2.6182 | 2.42 | 95000 | 2.4397 | 26.1331 | 9.3626 | 22.1076 | 25.4627 | 18.4413 |
| 2.6041 | 2.54 | 100000 | 2.4375 | 26.1301 | 9.3567 | 22.0869 | 25.465 | 18.4929 |
| 2.5996 | 2.67 | 105000 | 2.4367 | 26.0956 | 9.3314 | 22.063 | 25.4242 | 18.5074 |
| 2.6144 | 2.8 | 110000 | 2.4355 | 26.1764 | 9.4157 | 22.1231 | 25.5175 | 18.4729 |
| 2.608 | 2.93 | 115000 | 2.4351 | 26.1071 | 9.3627 | 22.0825 | 25.4514 | 18.474 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/s_m_frank | huggingtweets | 2022-04-10T22:28:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-10T22:27:04Z | ---
language: en
thumbnail: http://www.huggingtweets.com/s_m_frank/1649629685555/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1480658144833515525/DS0AOK_d_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">cute junco observer</div>
<div style="text-align: center; font-size: 14px;">@s_m_frank</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from cute junco observer.
| Data | cute junco observer |
| --- | --- |
| Tweets downloaded | 1253 |
| Retweets | 482 |
| Short tweets | 184 |
| Tweets kept | 587 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2s2slp94/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @s_m_frank's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bjkzwlr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bjkzwlr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/s_m_frank')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tonyalves/local_dataset | tonyalves | 2022-04-10T22:23:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-10T22:05:01Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
model-index:
- name: local_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# local_dataset
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PT dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.1+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
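A minimal transcription sketch (not part of the original card); the audio path is a placeholder and the clip is assumed to be 16 kHz mono Portuguese speech:
```python
from transformers import pipeline

# Model id taken from this repository; decoding a local file requires ffmpeg.
asr = pipeline("automatic-speech-recognition", model="tonyalves/local_dataset")
print(asr("sample_pt.wav"))  # "sample_pt.wav" is a placeholder path
```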
|
huggingtweets/nordicshrew | huggingtweets | 2022-04-10T22:04:13Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-10T22:02:07Z | ---
language: en
thumbnail: http://www.huggingtweets.com/nordicshrew/1649628249290/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1129935220260704256/RSmw3S0E_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">guelph’s finest poster</div>
<div style="text-align: center; font-size: 14px;">@nordicshrew</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from guelph’s finest poster.
| Data | guelph’s finest poster |
| --- | --- |
| Tweets downloaded | 3219 |
| Retweets | 429 |
| Short tweets | 145 |
| Tweets kept | 2645 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ywrep7o1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nordicshrew's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jti1kl9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jti1kl9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nordicshrew')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
keerthisaran/distilbert-base-uncased-finetuned-emotion | keerthisaran | 2022-04-10T21:58:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-14T18:45:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.92
- name: F1
type: f1
value: 0.920435758296201
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.92
- F1: 0.9204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8464 | 1.0 | 250 | 0.3125 | 0.9085 | 0.9061 |
| 0.2476 | 2.0 | 500 | 0.2183 | 0.92 | 0.9204 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Ayobami/UpsideDownDetector | Ayobami | 2022-04-10T20:42:37Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2022-04-10T06:26:56Z | ---
license: mit
---
An image rotation detector trained to detect whether an image is upside down or not.
|
BigSalmon/GPTNeo1.3BPointsLincolnFormalInformal | BigSalmon | 2022-04-10T20:04:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-04T05:04:06Z | It works worse than the GPT-2 Large & Medium models I have been training, because I don't have the compute needed to train on the entire dataset I have; I had to resort to using only parts of it.
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo1.3BPointsLincolnFormalInformal")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo1.3BPointsLincolnFormalInformal")
```
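A minimal generation sketch, continuing from the loading code above (not from the original card; the decoding parameters are illustrative and the prompt follows the informal-to-formal format shown below):
```python
prompt = (
    "informal english: corn fields are all across illinois, visible once you leave chicago.\n"
    "Translated into the Style of Abraham Lincoln:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```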
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
The prompts above cover two styles: points and keywords to prose, and informal to formal English. |
huggingtweets/graveyard_plots-hel_ql-witheredstrings | huggingtweets | 2022-04-10T19:16:31Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-10T19:15:53Z | ---
language: en
thumbnail: http://www.huggingtweets.com/graveyard_plots-hel_ql-witheredstrings/1649618186549/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511852580216967169/b1Aiv2t3_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1457045233783701504/fnjAg6lH_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1332861091119046661/7ZD3Nqqg_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">GHANEM & Anthropos & darth hattie</div>
<div style="text-align: center; font-size: 14px;">@graveyard_plots-hel_ql-witheredstrings</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from GHANEM & Anthropos & darth hattie.
| Data | GHANEM | Anthropos | darth hattie |
| --- | --- | --- | --- |
| Tweets downloaded | 413 | 1175 | 1288 |
| Retweets | 1 | 354 | 9 |
| Short tweets | 18 | 92 | 146 |
| Tweets kept | 394 | 729 | 1133 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26q7h6ze/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @graveyard_plots-hel_ql-witheredstrings's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vrvcbh4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vrvcbh4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/graveyard_plots-hel_ql-witheredstrings')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nepp1d0/SingleBertSmilesTargetInteraction | nepp1d0 | 2022-04-10T18:55:03Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-09T14:05:51Z | Prot_bert fine-tuned on the GPCR_train dataset for drug-target interaction prediction.
Training parameters:
- overwrite_output_dir=True
- evaluation_strategy="epoch"
- learning_rate=1e-3
- weight_decay=0.001
- per_device_train_batch_size=batch_size
- per_device_eval_batch_size=batch_size
- push_to_hub=True
- fp16=True
- logging_steps=logging_steps
- save_strategy='epoch'
- num_train_epochs=2 |
danhsf/xlm-roberta-base-finetuned-panx-de-fr | danhsf | 2022-04-10T18:21:26Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-10T18:01:12Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1667
- F1: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 |
| 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 |
| 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
V3RX2000/xlm-roberta-base-finetuned-panx-it | V3RX2000 | 2022-04-10T15:46:48Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-10T13:26:51Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.822805578342904
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2323
- F1: 0.8228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8126 | 1.0 | 70 | 0.3361 | 0.7231 |
| 0.2995 | 2.0 | 140 | 0.2526 | 0.8079 |
| 0.1865 | 3.0 | 210 | 0.2323 | 0.8228 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
V3RX2000/xlm-roberta-base-finetuned-panx-fr | V3RX2000 | 2022-04-10T15:39:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-10T13:08:53Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8354854938789199
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2651
- F1: 0.8355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5954 | 1.0 | 191 | 0.3346 | 0.7975 |
| 0.2689 | 2.0 | 382 | 0.2900 | 0.8347 |
| 0.1821 | 3.0 | 573 | 0.2651 | 0.8355 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
abdusah/ft-tatoeba-ar-en | abdusah | 2022-04-10T15:34:36Z | 15 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:open_subtitles",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-04-08T11:49:17Z | ---
tags:
- translation
- generated_from_trainer
datasets:
- open_subtitles
model-index:
- name: ft-tatoeba-ar-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-tatoeba-ar-en
This model was trained from scratch on the open_subtitles dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
V3RX2000/xlm-roberta-base-finetuned-panx-de-fr | V3RX2000 | 2022-04-10T15:31:08Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-10T12:45:09Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1667
- F1: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 |
| 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 |
| 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
V3RX2000/xlm-roberta-base-finetuned-panx-de | V3RX2000 | 2022-04-10T15:13:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-10T10:46:21Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8590909090909091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1380
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2642 | 1.0 | 525 | 0.1624 | 0.8251 |
| 0.1315 | 2.0 | 1050 | 0.1445 | 0.8508 |
| 0.0832 | 3.0 | 1575 | 0.1380 | 0.8591 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
brad1141/baseline_gptv1 | brad1141 | 2022-04-10T13:25:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-10T13:18:14Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: baseline_gptv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline_gptv1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
brad1141/baseline_bertv3 | brad1141 | 2022-04-10T13:16:14Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-10T13:09:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: baseline_bertv3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline_bertv3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
laampt/distilbert-base-uncased-finetuned-squad | laampt | 2022-04-10T13:15:06Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-10T13:05:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
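A minimal usage sketch (not part of the original card); the question and context below are illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="laampt/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This checkpoint is a DistilBERT model fine-tuned on the SQuAD dataset.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'SQuAD'}
```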
|
brad1141/baseline_longformerv1 | brad1141 | 2022-04-10T13:01:30Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"longformer",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-10T12:37:35Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: baseline_longformerv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline_longformerv1
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7596
- Precision: 0.1333
- Recall: 0.15
- F1: 0.1400
- Accuracy: 0.1400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.8469 | 0.89 | 1 | 1.7596 | 0.1333 | 0.15 | 0.1400 | 0.1400 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
V3RX2000/distilbert-base-uncased-finetuned-emotion | V3RX2000 | 2022-04-10T12:32:05Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-10T12:24:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9247142990809298
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Accuracy: 0.9245
- F1: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8812 | 1.0 | 250 | 0.3301 | 0.906 | 0.9035 |
| 0.2547 | 2.0 | 500 | 0.2285 | 0.9245 | 0.9247 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
malcolm/TSC_SentimentA_IMDBAmznTSC_2 | malcolm | 2022-04-10T09:43:32Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-10T07:59:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: TSC_SentimentA_IMDBAmznTSC_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSC_SentimentA_IMDBAmznTSC_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1985
- Accuracy: 0.9365
- F1: 0.9373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Pavithra/codeparrot-ds-sample-gpt-small-10epoch | Pavithra | 2022-04-10T07:49:47Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-08T17:43:51Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample-gpt-small-10epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample-gpt-small-10epoch
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.29 | 0.94 | 1000 | 2.8452 |
| 2.3155 | 1.88 | 2000 | 2.3659 |
| 1.8817 | 2.82 | 3000 | 2.2085 |
| 1.6245 | 3.77 | 4000 | 2.1260 |
| 1.4314 | 4.71 | 5000 | 2.0705 |
| 1.2698 | 5.65 | 6000 | 2.0603 |
| 1.1281 | 6.59 | 7000 | 2.0599 |
| 1.0108 | 7.53 | 8000 | 2.0769 |
| 0.9167 | 8.47 | 9000 | 2.0870 |
| 0.8551 | 9.42 | 10000 | 2.0943 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
jaeyeon/wav2vec2-child-en-tokenizer-4 | jaeyeon | 2022-04-10T05:28:49Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-08T07:33:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-child-en-tokenizer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-child-en-tokenizer-4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4709
- Wer: 0.3769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0334 | 1.72 | 100 | 1.4709 | 0.3769 |
| 0.0332 | 3.45 | 200 | 1.4709 | 0.3769 |
| 0.0343 | 5.17 | 300 | 1.4709 | 0.3769 |
| 0.032 | 6.9 | 400 | 1.4709 | 0.3769 |
| 0.0332 | 8.62 | 500 | 1.4709 | 0.3769 |
| 0.0327 | 10.34 | 600 | 1.4709 | 0.3769 |
| 0.0331 | 12.07 | 700 | 1.4709 | 0.3769 |
| 0.0334 | 13.79 | 800 | 1.4709 | 0.3769 |
| 0.0319 | 15.52 | 900 | 1.4709 | 0.3769 |
| 0.0338 | 17.24 | 1000 | 1.4709 | 0.3769 |
| 0.0321 | 18.97 | 1100 | 1.4709 | 0.3769 |
| 0.0367 | 20.69 | 1200 | 1.4709 | 0.3769 |
| 0.0331 | 22.41 | 1300 | 1.4709 | 0.3769 |
| 0.0332 | 24.14 | 1400 | 1.4709 | 0.3769 |
| 0.0347 | 25.86 | 1500 | 1.4709 | 0.3769 |
| 0.0319 | 27.59 | 1600 | 1.4709 | 0.3769 |
| 0.0302 | 29.31 | 1700 | 1.4709 | 0.3769 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
mojians/E2E-QA-Mining | mojians | 2022-04-10T02:34:53Z | 22 | 3 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question-generation",
"question-answer mining",
"dataset:squad",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-01T16:03:34Z | ---
datasets:
- squad
tags:
- question-generation
- question-answer mining
widget:
- text: "context: The English name 'Normans' comes from the French words Normans/Normanz, plural of Normant, modern French normand, which is itself borrowed from Old Low Franconian Nortmann 'Northman' or directly from Old Norse Norðmaðr, Latinized variously as Nortmannus, Normannus, or Nordmannus (recorded in Medieval Latin, 9th century) to mean 'Norseman, Viking'. generate questions and answers:"
inference:
parameters:
min_length: 50
license: mit
---
# E2E-QA-Mining
## Model description
This model mines question-answer pairs from a given context in an end-to-end fashion. It takes a context as input and generates a list of questions and answers as output. It is based on a pre-trained `t5-small` model and is trained with a prompt-engineering technique.
#### How to use
The model takes the context (with prompt) as an input sequence and will generate question-answer pairs as an output sequence. The max sequence length is 512 tokens. Inputs should be organized into the following format:
```
context: context text here. generate questions and answers:
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
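A minimal sketch of that flow (the prompt format follows the description above; the `min_length` value mirrors the widget settings and the other generation arguments are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "mojians/E2E-QA-Mining"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

context = "42 is the answer to life, the universe and everything."
text = f"context: {context} generate questions and answers:"  # prompt format expected by the model

inputs = tokenizer(text, return_tensors="pt", max_length=512, truncation=True)
outputs = model.generate(**inputs, min_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```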
You can try out the demo in the [E2E-QA-mining space app](https://huggingface.co/spaces/mojians/E2E-QA-mining)
#### Limitations and bias
The model is limited to generating questions in the same style as those found in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/). The generated questions can potentially be leading or reflect biases that are present in the context. If the context is too short or completely absent, or if the context and answer do not match, the generated question is likely to be incoherent.
## Training data
The model was fine-tuned on a dataset made up of several well-known QA datasets ([SQuAD](https://rajpurkar.github.io/SQuAD-explorer/))
## Source and Citation
Please find our code and cite us in this repo [https://github.com/jian-mo/E2E-QA-Mining](https://github.com/jian-mo/E2E-QA-Mining) |
huggingtweets/rusticgendarme | huggingtweets | 2022-04-09T20:23:24Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/rusticgendarme/1649535793480/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1477404220685008896/bEbHFn3g_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">merz▫️▫️▫️▫️</div>
<div style="text-align: center; font-size: 14px;">@rusticgendarme</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from merz▫️▫️▫️▫️.
| Data | merz▫️▫️▫️▫️ |
| --- | --- |
| Tweets downloaded | 3220 |
| Retweets | 527 |
| Short tweets | 613 |
| Tweets kept | 2080 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yxv7eg1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rusticgendarme's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2eajj2bh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2eajj2bh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rusticgendarme')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Hodiden/autotrain-TestProj-722121991 | Hodiden | 2022-04-09T19:21:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"unk",
"dataset:Hodiden/autotrain-data-TestProj",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-09T04:53:23Z | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Hodiden/autotrain-data-TestProj
co2_eq_emissions: 8.052949236815056
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 722121991
- CO2 Emissions (in grams): 8.052949236815056
## Validation Metrics
- Loss: 1.123626708984375
- Rouge1: 56.1275
- Rouge2: 33.5648
- RougeL: 51.986
- RougeLsum: 51.9943
- Gen Len: 13.2823
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Hodiden/autotrain-TestProj-722121991
``` |
alexjercan/codebert-base-buggy-token-classification | alexjercan | 2022-04-09T16:00:35Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-04T07:02:54Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: codebert-base-buggy-token-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codebert-base-buggy-token-classification
This model is a fine-tuned version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5217
- Precision: 0.6942
- Recall: 0.0940
- F1: 0.1656
- Accuracy: 0.7714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
damlab/GO-language | damlab | 2022-04-09T14:28:07Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"dataset:damlab/uniprot",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-04-08T18:26:38Z | ---
license: mit
datasets:
- damlab/uniprot
metrics:
- accuracy
widget:
- text: 'involved_in GO:0006468 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'
example_title: 'Function'
---
# GO-Language model
## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
This model was built as a way to encode the Gene Ontology definition of a protein as a vector representation.
It was trained on a collection of gene-ontology terms from model organisms.
Each function was sorted by ID number and combined with its annotation description, i.e. (`is_a`, `enables`, `located_in`, etc.).
The model is tokenized such that each description and GO term is its own token.
This is intended to be used as a translation model between PROT-BERT and GO-Language.
That type of translation model will be useful for predicting the function of novel genes.
## Model Description
This model was trained using the damlab/uniprot dataset on the `go` field with 256 token chunks and a 15% mask rate.
## Intended Uses & Limitations
This model is a useful encapsulation of gene ontology functions.
It allows both an exploration of gene-level similarities as well as comparisons between functional terms.
## How to use
As this is a BERT-style masked language model, it can be used to determine the most likely token at a masked position.
```python
from transformers import pipeline
unmasker = pipeline("fill-mask", model="damlab/GO-language")
unmasker("involved_in [MASK] involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372")
[{'score': 0.1040298342704773,
'token': 103,
'token_str': 'GO:0002250',
'sequence': 'involved_in GO:0002250 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.018045395612716675,
'token': 21,
'token_str': 'GO:0005576',
'sequence': 'involved_in GO:0005576 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.015035462565720081,
'token': 50,
'token_str': 'GO:0000139',
'sequence': 'involved_in GO:0000139 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.01181247178465128,
'token': 37,
'token_str': 'GO:0007165',
'sequence': 'involved_in GO:0007165 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.01000668853521347,
'token': 14,
'token_str': 'GO:0005737',
'sequence': 'involved_in GO:0005737 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'}
]
```
## Training Data
The model was trained on the [damlab/uniprot](https://huggingface.co/datasets/damlab/uniprot) dataset, starting from a randomly initialized model.
The Gene Ontology functions were sorted (by ID number) along with their annotating terms.
## Training Procedure
### Preprocessing
All strings were concatenated and split into 256-token chunks for training. A random 20% of the chunks were held out for validation.
### Training
Training was performed with the Hugging Face training module using the MaskedLM data loader with a 15% masking rate. The learning rate was set at E-5 with 50K warm-up steps and a cosine_with_restarts learning rate schedule, and training continued until 3 consecutive epochs failed to improve the loss on the held-out dataset.
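A rough sketch of that setup with the Hugging Face `Trainer` API (the values follow the description above; this is not the original training script, and `train_chunks` / `eval_chunks` are placeholders for the tokenized 256-token chunks):
```python
from transformers import (AutoConfig, AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("damlab/GO-language")
# Randomly initialized model with the same architecture, as described above
model = AutoModelForMaskedLM.from_config(AutoConfig.from_pretrained("damlab/GO-language"))

# Mask 15% of the tokens in each 256-token chunk
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="go-language-mlm",
    learning_rate=1e-5,                        # "E-5" in the description above
    warmup_steps=50_000,
    lr_scheduler_type="cosine_with_restarts",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,               # pair with an early-stopping callback
)
# Trainer(model=model, args=args, data_collator=collator,
#         train_dataset=train_chunks, eval_dataset=eval_chunks).train()
```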
## BibTeX Entry and Citation Info
[More Information Needed]
|
Saitomar/Fellowship-Challenge-CV | Saitomar | 2022-04-09T14:01:46Z | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | 2022-04-09T10:36:13Z | # Fatima's Fellowship Challenge
This card contains the model checkpoint and training metrics for the computer vision coding challenge of the fellowship program.
- Epochs : 30
- Batch size : 32
- Learning rate : 0.0005
- Model : ResNet-50
- Optimizer : Adam
- Dataset : CIFAR10
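A minimal sketch of how such an upside-down training set can be built and the classifier head adapted, under the hyperparameters above (an illustrative reconstruction, not the exact training code):
```python
import torch
from torchvision import datasets, transforms, models

base = datasets.CIFAR10(root="data", train=True, download=True,
                        transform=transforms.ToTensor())

class UpsideDownDataset(torch.utils.data.Dataset):
    """Wraps a dataset; label 1 = rotated 180 degrees (upside down), 0 = upright."""
    def __init__(self, dataset):
        self.dataset = dataset
    def __len__(self):
        return len(self.dataset)
    def __getitem__(self, idx):
        img, _ = self.dataset[idx]
        flipped = idx % 2 == 1                  # flip every other image
        if flipped:
            img = torch.flip(img, dims=[1, 2])  # 180-degree rotation of a CxHxW tensor
        return img, int(flipped)

train_set = UpsideDownDataset(base)
model = models.resnet50(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # 2-way orientation head
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
```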
|
edwardjross/xlm-roberta-base-finetuned-recipe-all | edwardjross | 2022-04-09T13:19:55Z | 324 | 14 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"arxiv:2004.12184",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-08T14:01:31Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-recipe-all
results: []
widget:
- text: "1 sheet of frozen puff pastry (thawed)"
- text: "1/2 teaspoon fresh thyme, minced"
- text: "2-3 medium tomatoes"
- text: "1 petit oignon rouge"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-recipe-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the recipe ingredient [NER dataset](https://github.com/cosylabiiit/recipe-knowledge-mining) from the paper [A Named Entity Based Approach to Model Recipes](https://arxiv.org/abs/2004.12184) (using both the `gk` and `ar` datasets).
It achieves the following results on the evaluation set:
- Loss: 0.1169
- F1: 0.9672
On the test set it obtains an F1 of 0.9615, slightly above the CRF used in the paper.
## Model description
Predicts the tag of each token in an ingredient string.
| Tag | Significance | Example |
| --- | --- | --- |
| NAME | Name of Ingredient | salt, pepper |
| STATE | Processing State of Ingredient. | ground, thawed |
| UNIT | Measuring unit(s). | gram, cup |
| QUANTITY | Quantity associated with the unit(s). | 1, 1 1/2 , 2-4 |
| SIZE | Portion sizes mentioned. | small, large |
| TEMP | Temperature applied prior to cooking. | hot, frozen |
| DF (DRY/FRESH) | Fresh otherwise as mentioned. | dry, fresh |
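A minimal usage sketch (assuming the standard `transformers` token-classification pipeline; `aggregation_strategy="simple"` is one way to propagate subtoken tags to whole words, per the limitations below):
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="edwardjross/xlm-roberta-base-finetuned-recipe-all",
    aggregation_strategy="simple",  # merge subtoken predictions into word-level tags
)
print(tagger("1/2 teaspoon fresh thyme, minced"))
# entity groups correspond to the tags in the table above (QUANTITY, UNIT, DF, NAME, STATE, ...)
```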
## Intended uses & limitations
* Only trained on ingredient strings.
* Tags subtokens; the tag should be propagated to the whole word
* Works best with pre-tokenisation splitting of symbols (such as parentheses) and numbers (e.g. 50g -> 50 g)
* Typically only detects the first ingredient if there are multiple.
* Only trained on two American English data sources
* Tags TEMP and DF have very little training data.
## Training and evaluation data
Both the `ar` (AllRecipes.com) and `gk` (FOOD.com) datasets were obtained from the TSVs in the authors' [repository](https://github.com/cosylabiiit/recipe-knowledge-mining).
## Training procedure
It follows the overall procedure from Chapter 4 of [Natural Language Processing with Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098103231/) by Tunstall, von Werra and Wolf.
See the [training notebook](https://github.com/EdwardJRoss/nlp_transformers_exercises/blob/master/notebooks/ch4-ner-recipe-stanford-crf.ipynb) for details.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2529 | 1.0 | 331 | 0.1303 | 0.9592 |
| 0.1164 | 2.0 | 662 | 0.1224 | 0.9640 |
| 0.0904 | 3.0 | 993 | 0.1156 | 0.9671 |
| 0.0585 | 4.0 | 1324 | 0.1169 | 0.9672 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Yu-Ping/BERT-Fake_News_Classifier | Yu-Ping | 2022-04-09T13:06:21Z | 0 | 0 | null | [
"bert-base-cased",
"text classifier",
"PyTorch",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2022-04-09T07:44:23Z | ---
language:
- en
- TW
thumbnail: https://colab.research.google.com/drive/1L3PvqNjMF-K_ykztrNEqKhky279EcPaN?usp=sharing
tags:
- bert-base-cased
- text classifier
- PyTorch
license: apache-2.0
datasets:
- True.csv (downloaded from https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset)
- Fake.csv (downloaded from https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset)
metrics:
- accuracy
- auc
model-index:
- name: bert-base-cased
results:
- task:
type: fake-news-classifier
name: Text Classification
dataset:
type: news
name: Fake and real news
metrics:
- type: accuracy
value: 90.92%
---
|
tau/tavbert-tr | tau | 2022-04-09T12:55:55Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"language model",
"tr",
"dataset:oscar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-04-09T12:52:34Z | ---
language: tr
tags:
- roberta
- language model
datasets:
- oscar
---
# TavBERT base model
A Turkish BERT-style masked language model operating over characters, pre-trained by masking spans of characters, similarly to SpanBERT (Joshi et al., 2020).
### How to use
```python
import numpy as np
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("tau/tavbert-tr")
tokenizer = AutoTokenizer.from_pretrained("tau/tavbert-tr")
def mask_sentence(sent, span_len=5):
    # Mask a random span of `span_len` characters and ask the model to restore it
    start_pos = np.random.randint(0, len(sent) - span_len)
    masked_sent = sent[:start_pos] + '[MASK]' * span_len + sent[start_pos + span_len:]
    print("Masked sentence:", masked_sent)
    # Logits for every character position (drop the special tokens at both ends)
    output = model(**tokenizer.encode_plus(masked_sent, return_tensors='pt'))['logits'][0][1:-1]
    # Pick the most likely character for each masked position
    preds = [int(x) for x in torch.argmax(torch.softmax(output, axis=1), axis=1)[start_pos:start_pos + span_len]]
    pred_sent = sent[:start_pos] + ''.join(tokenizer.convert_ids_to_tokens(preds)) + sent[start_pos + span_len:]
    print("Model's prediction:", pred_sent)
```
## Training data
OSCAR (Ortiz, 2019) Turkish section (27 GB text, 77 million sentences).
|
huggingtweets/notsorobot | huggingtweets | 2022-04-09T12:41:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-08T16:11:26Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1317183233495388160/nLbBT6WF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">3bkreno</div>
<div style="text-align: center; font-size: 14px;">@notsorob</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 3bkreno.
| Data | 3bkreno |
| --- | --- |
| Tweets downloaded | 26419 |
| Retweets | 111 |
| Short tweets | -8796 |
| Tweets kept | 8796 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1l7p1yze/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @notsorob's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ypaq5o5y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ypaq5o5y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/notsorob')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jicoc22578/autotrain-livedoor_news-722922024 | jicoc22578 | 2022-04-09T10:47:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"ja",
"dataset:jicoc22578/autotrain-data-livedoor_news",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-09T10:33:57Z | ---
tags: autotrain
language: ja
widget:
- text: "Windows 11搭載PCを買ったら最低限やっておきたいこと"
- text: "3月デスクトップOSシェア、Windowsが増加しMacが減少"
- text: "raytrek、Core i7-12700HとRTX 3070 Tiを搭載するノートPC"
datasets:
- jicoc22578/autotrain-data-livedoor_news
co2_eq_emissions: 0.019299491458156143
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 722922024
- CO2 Emissions (in grams): 0.019299491458156143
## Validation Metrics
- Loss: 0.19609540700912476
- Accuracy: 0.9457627118644067
- Macro F1: 0.9404319054946133
- Micro F1: 0.9457627118644067
- Weighted F1: 0.9456037443251943
- Macro Precision: 0.9420917371721244
- Micro Precision: 0.9457627118644067
- Weighted Precision: 0.9457910238180336
- Macro Recall: 0.9391783746329772
- Micro Recall: 0.9457627118644067
- Weighted Recall: 0.9457627118644067
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/jicoc22578/autotrain-livedoor_news-722922024
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("jicoc22578/autotrain-livedoor_news-722922024", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("jicoc22578/autotrain-livedoor_news-722922024", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
gdwangh/distilbert-base-uncased-finetuned-cola | gdwangh | 2022-04-09T10:39:17Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-31T14:34:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5197669430092784
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6532
- Matthews Correlation: 0.5198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5228 | 1.0 | 535 | 0.5270 | 0.4212 |
| 0.3448 | 2.0 | 1070 | 0.5360 | 0.5073 |
| 0.2305 | 3.0 | 1605 | 0.6532 | 0.5198 |
| 0.1691 | 4.0 | 2140 | 0.7934 | 0.5171 |
| 0.128 | 5.0 | 2675 | 0.8732 | 0.5166 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Chikashi/t5-small-finetuned-wikihow_3epoch_b4_lr3e-3 | Chikashi | 2022-04-09T08:34:39Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikihow",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-08T23:02:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-wikihow_3epoch_b4_lr3e-3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 26.7383
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikihow_3epoch_b4_lr3e-3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3400
- Rouge1: 26.7383
- Rouge2: 10.1981
- Rougel: 22.8642
- Rougelsum: 26.0922
- Gen Len: 18.524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.2548 | 0.13 | 5000 | 2.9708 | 22.0519 | 6.7142 | 18.7677 | 21.4627 | 17.9546 |
| 3.1153 | 0.25 | 10000 | 2.9099 | 20.2838 | 5.8365 | 17.5009 | 19.7112 | 18.4981 |
| 3.0478 | 0.38 | 15000 | 2.8763 | 22.8282 | 7.3649 | 19.6843 | 22.2312 | 18.1331 |
| 3.0146 | 0.51 | 20000 | 2.8484 | 23.2465 | 7.4295 | 19.621 | 22.6246 | 18.5115 |
| 2.9572 | 0.64 | 25000 | 2.7902 | 23.8681 | 7.9617 | 20.4984 | 23.2066 | 18.5544 |
| 2.9425 | 0.76 | 30000 | 2.7577 | 23.4402 | 7.5289 | 19.7382 | 22.7941 | 18.4613 |
| 2.9075 | 0.89 | 35000 | 2.7343 | 23.0082 | 7.5408 | 19.8426 | 22.3832 | 18.1218 |
| 2.8705 | 1.02 | 40000 | 2.7136 | 23.9492 | 7.8861 | 20.3675 | 23.3035 | 18.4869 |
| 2.7967 | 1.14 | 45000 | 2.6923 | 24.2394 | 8.2895 | 20.7275 | 23.6127 | 18.3486 |
| 2.7794 | 1.27 | 50000 | 2.6639 | 24.4062 | 8.2481 | 20.8957 | 23.8077 | 18.4258 |
| 2.7776 | 1.4 | 55000 | 2.6321 | 24.6213 | 8.4161 | 21.0528 | 23.968 | 18.351 |
| 2.7397 | 1.53 | 60000 | 2.6116 | 24.16 | 8.3605 | 20.618 | 23.5037 | 18.6049 |
| 2.7199 | 1.65 | 65000 | 2.5846 | 24.2606 | 8.3829 | 20.6274 | 23.6252 | 18.4742 |
| 2.7044 | 1.78 | 70000 | 2.5663 | 25.0452 | 8.896 | 21.4554 | 24.4748 | 18.3143 |
| 2.6928 | 1.91 | 75000 | 2.5365 | 25.1312 | 9.008 | 21.6376 | 24.4963 | 18.5605 |
| 2.6281 | 2.03 | 80000 | 2.5209 | 25.5311 | 9.1521 | 21.729 | 24.8864 | 18.2597 |
| 2.5333 | 2.16 | 85000 | 2.4860 | 25.4834 | 9.2969 | 21.7257 | 24.8802 | 18.3831 |
| 2.5308 | 2.29 | 90000 | 2.4619 | 26.0526 | 9.605 | 22.2178 | 25.4353 | 18.4235 |
| 2.5136 | 2.42 | 95000 | 2.4356 | 25.9434 | 9.6537 | 22.2957 | 25.312 | 18.4647 |
| 2.4801 | 2.54 | 100000 | 2.4098 | 26.1109 | 9.7637 | 22.3844 | 25.4771 | 18.5765 |
| 2.4494 | 2.67 | 105000 | 2.3835 | 26.332 | 9.9472 | 22.4243 | 25.6933 | 18.5985 |
| 2.4393 | 2.8 | 110000 | 2.3590 | 26.6896 | 10.2248 | 22.8743 | 26.0665 | 18.4883 |
| 2.4071 | 2.93 | 115000 | 2.3400 | 26.7383 | 10.1981 | 22.8642 | 26.0922 | 18.524 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
nielsr/segformer-test-v6 | nielsr | 2022-04-09T08:21:01Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"dataset:segments/sidewalk-semantic",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2022-04-09T07:53:39Z | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
--- |
nikhedward/bart-large-cnn-finetuned-multi-news1 | nikhedward | 2022-04-09T04:51:07Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:multi_news",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-09T02:56:22Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-multi-news1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
args: default
metrics:
- name: Rouge1
type: rouge
value: 42.1215
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-multi-news1
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0858
- Rouge1: 42.1215
- Rouge2: 14.9986
- Rougel: 23.4737
- Rougelsum: 36.4212
- Gen Len: 133.703
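A quick usage sketch (standard `transformers` summarization pipeline; the input text and length limits below are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="nikhedward/bart-large-cnn-finetuned-multi-news1")
article = (
    "Officials announced a new transit plan on Monday. "
    "The proposal would add three bus lines and extend service hours downtown."
)
print(summarizer(article, min_length=10, max_length=60)[0]["summary_text"])
```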
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1984 | 1.0 | 750 | 2.0858 | 42.1215 | 14.9986 | 23.4737 | 36.4212 | 133.703 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
avialfont/dummy-finetuned-amazon-en-es | avialfont | 2022-04-09T03:35:50Z | 4 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-08T15:20:54Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: avialfont/dummy-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# avialfont/dummy-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.6755
- Validation Loss: 3.8033
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 3627, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2942 | 4.4915 | 0 |
| 6.2878 | 3.9207 | 1 |
| 5.6755 | 3.8033 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
MeerAnwar/CodingChallengeFatimaFellowship | MeerAnwar | 2022-04-09T01:35:28Z | 0 | 0 | null | [
"region:us"
] | null | 2022-04-08T14:35:45Z | # 1. Deep Learning for Vision
Upside down detector: Train a model to detect if images are upside down
* Pick a dataset of natural images (we suggest looking at datasets on the Hugging Face Hub)
* Synthetically turn some of the images upside down. Create a training and test set.
* Build a neural network (using TensorFlow, PyTorch, or any framework you like)
* Train it to classify image orientation until a reasonable accuracy is reached
* Upload the model to the Hugging Face Hub, and add a link to your model below.
* Look at some of the images that were classified incorrectly. Please explain what you might do to improve your model performance on these images in the future (you do not need to implement these suggestions) |
nateraw/test-save-keras-sequential | nateraw | 2022-04-08T20:15:54Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | 2022-04-08T19:07:35Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
lysandre/test-save-keras-sequential | lysandre | 2022-04-08T20:01:39Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | 2022-04-08T19:32:35Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Metrics
Model history needed |
bmichele/poetry-generation-nextline-mbart-gut-en-single | bmichele | 2022-04-08T19:13:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-08T18:46:39Z | # poetry-generation-nextline-mbart-gut-en-single
* `nextline`: generates a poem line from previous line(s)
* `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
* `gut`: trained on Project Gutenberg data
* `en`: English language
* `single`: uses only last poem line as input for generation |
nateraw/autoencoder-keras-mnist-demo-new | nateraw | 2022-04-08T18:37:12Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | 2022-04-08T18:37:04Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
krinal214/augmented_Squad_Translated | krinal214 | 2022-04-08T18:15:59Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-08T15:58:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: augmented_Squad_Translated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# augmented_Squad_Translated
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5251
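A usage sketch with the standard extractive question-answering pipeline (the question/context pair below is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="krinal214/augmented_Squad_Translated")
print(qa(question="Where was the treaty signed?",
         context="The treaty was signed in Geneva in 1925."))
```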
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1154 | 1.0 | 10835 | 0.5251 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
MaRiOrOsSi/t5-base-finetuned-question-answering | MaRiOrOsSi | 2022-04-08T18:00:14Z | 1,273 | 32 | transformers | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"Generative Question Answering",
"en",
"dataset:duorc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-08T07:36:44Z | ---
language: en
datasets:
- duorc
widget:
- text: "question: Is Giacomo Italian? context: Giacomo is 25 years old and he was born in Tuscany"
- text: "question: Where does Christian come from? context: Christian is a student of UNISI but he come from Caserta"
- text: "question: Is the dog coat grey? context: You have a beautiful dog with a brown coat"
tags:
- Generative Question Answering
---
# T5 for Generative Question Answering
This model is the result produced by Christian Di Maio and Giacomo Nunziati for the Language Processing Technologies exam.
Reference for [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [DuoRC](https://huggingface.co/datasets/duorc) for **Generative Question Answering** by just prepending the *question* to the *context*.
## Code
The code used for T5 training is available at this [repository](https://github.com/nunziati/bert-vs-t5-for-question-answering/blob/main/train_t5_selfrc.py).
## Results
The results are evaluated on:
- DuoRC/SelfRC -> Test Subset
- DuoRC/ParaphraseRC -> Test Subset
- SQUADv1 -> Validation Subset
All tokens not corresponding to dictionary words were removed when computing the evaluation metrics.
The model used as reference is BERT finetuned on SQUAD v1.
| Model | SelfRC | ParaphraseRC | SQUAD |
|--|--|--|--|
| T5-BASE-FINETUNED | **F1**: 49.00 **EM**: 31.38 | **F1**: 28.75 **EM**: 15.18 | **F1**: 63.28 **EM**: 37.24 |
| BERT-BASE-FINETUNED | **F1**: 47.18 **EM**: 30.76 | **F1**: 21.20 **EM**: 12.62 | **F1**: 77.19 **EM**: 57.81 |
## How to use it 🚀
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
model_name = "MaRiOrOsSi/t5-base-finetuned-question-answering"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelWithLMHead.from_pretrained(model_name)
question = "What is 42?"
context = "42 is the answer to life, the universe and everything"
input = f"question: {question} context: {context}"
encoded_input = tokenizer([input],
return_tensors='pt',
max_length=512,
truncation=True)
output = model.generate(input_ids = encoded_input.input_ids,
attention_mask = encoded_input.attention_mask)
output = tokenizer.decode(output[0], skip_special_tokens=True)
print(output)
```
## Citation
Created by [Christian Di Maio](https://it.linkedin.com/in/christiandimaio) and [Giacomo Nunziati](https://it.linkedin.com/in/giacomo-nunziati-b19572185)
> Made with <span style="color: #e25555;">♥</span> in Italy
|
caush/TestMeanFraction2 | caush | 2022-04-08T17:51:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-08T17:26:33Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: TestMeanFraction2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TestMeanFraction2
This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3967
- Matthews Correlation: 0.2537
## Model description
More information needed
## Intended uses & limitations
"La panique totale" Cette femme trouve une énorme araignée suspendue à sa douche.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 0.13 | 50 | 1.1126 | 0.1589 |
| No log | 0.25 | 100 | 1.0540 | 0.1884 |
| No log | 0.38 | 150 | 1.1533 | 0.0818 |
| No log | 0.51 | 200 | 1.0676 | 0.1586 |
| No log | 0.64 | 250 | 0.9949 | 0.2280 |
| No log | 0.76 | 300 | 1.0343 | 0.2629 |
| No log | 0.89 | 350 | 1.0203 | 0.2478 |
| No log | 1.02 | 400 | 1.0041 | 0.2752 |
| No log | 1.15 | 450 | 1.0808 | 0.2256 |
| 1.023 | 1.27 | 500 | 1.0029 | 0.2532 |
| 1.023 | 1.4 | 550 | 1.0204 | 0.2508 |
| 1.023 | 1.53 | 600 | 1.1377 | 0.1689 |
| 1.023 | 1.65 | 650 | 1.0499 | 0.2926 |
| 1.023 | 1.78 | 700 | 1.0441 | 0.2474 |
| 1.023 | 1.91 | 750 | 1.0279 | 0.2611 |
| 1.023 | 2.04 | 800 | 1.1511 | 0.2804 |
| 1.023 | 2.16 | 850 | 1.2381 | 0.2512 |
| 1.023 | 2.29 | 900 | 1.3340 | 0.2385 |
| 1.023 | 2.42 | 950 | 1.4372 | 0.2842 |
| 0.7325 | 2.54 | 1000 | 1.3967 | 0.2537 |
| 0.7325 | 2.67 | 1050 | 1.4272 | 0.2624 |
| 0.7325 | 2.8 | 1100 | 1.3869 | 0.1941 |
| 0.7325 | 2.93 | 1150 | 1.4983 | 0.2063 |
| 0.7325 | 3.05 | 1200 | 1.4959 | 0.2409 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0a0+0aef44c
- Datasets 2.0.0
- Tokenizers 0.11.6
|
leonadase/bert-base-chinese-finetuned-ner-v1 | leonadase | 2022-04-08T17:49:01Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:fdner",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-08T13:26:02Z | ---
tags:
- generated_from_trainer
datasets:
- fdner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-chinese-finetuned-ner-v1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: fdner
type: fdner
args: fdner
metrics:
- name: Precision
type: precision
value: 0.981203007518797
- name: Recall
type: recall
value: 0.9886363636363636
- name: F1
type: f1
value: 0.9849056603773584
- name: Accuracy
type: accuracy
value: 0.9909536373916321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-ner-v1
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the fdner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0413
- Precision: 0.9812
- Recall: 0.9886
- F1: 0.9849
- Accuracy: 0.9910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 8 | 2.0640 | 0.0 | 0.0 | 0.0 | 0.4323 |
| No log | 2.0 | 16 | 1.7416 | 0.0204 | 0.0227 | 0.0215 | 0.5123 |
| No log | 3.0 | 24 | 1.5228 | 0.0306 | 0.0265 | 0.0284 | 0.5456 |
| No log | 4.0 | 32 | 1.2597 | 0.0961 | 0.1591 | 0.1198 | 0.6491 |
| No log | 5.0 | 40 | 1.0273 | 0.1588 | 0.2159 | 0.1830 | 0.7450 |
| No log | 6.0 | 48 | 0.8026 | 0.2713 | 0.3258 | 0.2960 | 0.8208 |
| No log | 7.0 | 56 | 0.6547 | 0.36 | 0.4091 | 0.3830 | 0.8513 |
| No log | 8.0 | 64 | 0.5180 | 0.4650 | 0.5038 | 0.4836 | 0.8873 |
| No log | 9.0 | 72 | 0.4318 | 0.5139 | 0.5606 | 0.5362 | 0.9067 |
| No log | 10.0 | 80 | 0.3511 | 0.6169 | 0.6894 | 0.6512 | 0.9291 |
| No log | 11.0 | 88 | 0.2887 | 0.6691 | 0.6894 | 0.6791 | 0.9414 |
| No log | 12.0 | 96 | 0.2396 | 0.7042 | 0.7576 | 0.7299 | 0.9516 |
| No log | 13.0 | 104 | 0.2052 | 0.7568 | 0.8371 | 0.7950 | 0.9587 |
| No log | 14.0 | 112 | 0.1751 | 0.8303 | 0.8712 | 0.8503 | 0.9610 |
| No log | 15.0 | 120 | 0.1512 | 0.8464 | 0.8977 | 0.8713 | 0.9668 |
| No log | 16.0 | 128 | 0.1338 | 0.8759 | 0.9091 | 0.8922 | 0.9710 |
| No log | 17.0 | 136 | 0.1147 | 0.8959 | 0.9129 | 0.9043 | 0.9746 |
| No log | 18.0 | 144 | 0.1011 | 0.9326 | 0.9432 | 0.9379 | 0.9761 |
| No log | 19.0 | 152 | 0.0902 | 0.9251 | 0.9356 | 0.9303 | 0.9795 |
| No log | 20.0 | 160 | 0.0806 | 0.9440 | 0.9583 | 0.9511 | 0.9804 |
| No log | 21.0 | 168 | 0.0743 | 0.9586 | 0.9659 | 0.9623 | 0.9812 |
| No log | 22.0 | 176 | 0.0649 | 0.9511 | 0.9583 | 0.9547 | 0.9851 |
| No log | 23.0 | 184 | 0.0595 | 0.9591 | 0.9773 | 0.9681 | 0.9876 |
| No log | 24.0 | 192 | 0.0537 | 0.9625 | 0.9735 | 0.9680 | 0.9883 |
| No log | 25.0 | 200 | 0.0505 | 0.9701 | 0.9848 | 0.9774 | 0.9894 |
| No log | 26.0 | 208 | 0.0464 | 0.9737 | 0.9811 | 0.9774 | 0.9904 |
| No log | 27.0 | 216 | 0.0439 | 0.9737 | 0.9811 | 0.9774 | 0.9906 |
| No log | 28.0 | 224 | 0.0428 | 0.9812 | 0.9886 | 0.9849 | 0.9910 |
| No log | 29.0 | 232 | 0.0417 | 0.9812 | 0.9886 | 0.9849 | 0.9910 |
| No log | 30.0 | 240 | 0.0413 | 0.9812 | 0.9886 | 0.9849 | 0.9910 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
malcolm/TSC_finetuning-sentiment-movie-model | malcolm | 2022-04-08T16:44:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-08T14:33:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: TSC_finetuning-sentiment-movie-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSC_finetuning-sentiment-movie-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1480
- Accuracy: 0.9578
- F1: 0.9757
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/lilpeeplyric | huggingtweets | 2022-04-08T15:15:13Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-08T15:14:31Z | ---
language: en
thumbnail: http://www.huggingtweets.com/lilpeeplyric/1649430909105/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1445263525878902787/yW8p2-e__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">lil peep lyrics bot</div>
<div style="text-align: center; font-size: 14px;">@lilpeeplyric</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from lil peep lyrics bot.
| Data | lil peep lyrics bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 3250 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jgq3lf6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lilpeeplyric's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1lbjza1d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1lbjza1d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lilpeeplyric')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nielsr/segformer-test-v5 | nielsr | 2022-04-08T15:05:50Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"dataset:segments/sidewalk-semantic",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2022-04-08T14:51:17Z | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
--- |
xaqren/sentiment_analysis | xaqren | 2022-04-08T14:59:55Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"exbert",
"en",
"dataset:Confidential",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-05T13:46:58Z | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- Confidential
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model description [xaqren/sentiment_analysis]
This is a fine-tuned downstream version of the bert-base-uncased model for sentiment analysis; it is not intended for
further downstream fine-tuning on any other tasks. The model was trained on a classified dataset for text classification.
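As a minimal inference sketch (the example sentence is illustrative, and the returned label names depend on the `id2label` mapping shipped with the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "xaqren/sentiment_analysis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize a single example sentence and run it through the classification head
inputs = tokenizer("I really enjoyed this movie.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and look up the predicted label name
probs = torch.softmax(logits, dim=-1)[0]
predicted = model.config.id2label[int(probs.argmax())]
print(predicted, probs.tolist())
```
 |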
iceboy95/SqueezeNet_VisionQ1_20220512 | iceboy95 | 2022-04-08T14:15:01Z | 0 | 0 | null | [
"license:afl-3.0",
"region:us"
] | null | 2022-04-08T14:09:42Z | ---
license: afl-3.0
---
## Description
SqueezeNet from the PyTorch model zoo, pretrained on ImageNet and fine-tuned on the scenic landscape dataset from Kaggle: https://www.kaggle.com/datasets/arnaud58/landscape-pictures. A hedged loading sketch is included below the results.
## Results
Trained on 8K samples, tested on 120+ non-overlapping samples.
Accuracy: 0.978261
f1-score: 0.978417
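No loading code is provided with this repository, so the following is only a sketch of how such a fine-tuned torchvision SqueezeNet might be restored. It assumes the weights were saved as a plain state dict with a binary (scenic vs. non-scenic) classifier head; the file name `squeezenet_visionq1.pth` and the two-class head are assumptions, not facts about this repo.
```python
import torch
from PIL import Image
from torchvision import models, transforms

# Rebuild a SqueezeNet 1.1 with a 2-class head (assumed) and load the fine-tuned weights
model = models.squeezenet1_1(weights=None)  # torchvision >= 0.13; older versions use pretrained=False
model.classifier[1] = torch.nn.Conv2d(512, 2, kernel_size=1)
state_dict = torch.load("squeezenet_visionq1.pth", map_location="cpu")  # placeholder file name
model.load_state_dict(state_dict)
model.eval()

# Standard ImageNet preprocessing, matching the pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    predicted_class = model(image).argmax(dim=1).item()
print(predicted_class)
```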
|
philschmid/MiniLMv2-L6-H384-sst2 | philschmid | 2022-04-08T13:56:53Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-08T13:54:14Z | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: MiniLMv2-L6-H384-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9197247706422018
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H384-sst2
This model is a fine-tuned version of [nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2532
- Accuracy: 0.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
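For illustration only, these settings correspond roughly to the `TrainingArguments` sketch below; the output directory name is a placeholder, and the 8-device SageMaker data-parallel launch (total batch size 256) is configured by the training launcher rather than by these arguments.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="minilmv2-l6-h384-sst2",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",  # used with the default Adam(betas=(0.9, 0.999), eps=1e-8)
    warmup_steps=500,
    num_train_epochs=5,
    fp16=True,  # native AMP mixed precision
)
```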
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5787 | 1.0 | 264 | 0.3496 | 0.8624 |
| 0.3413 | 2.0 | 528 | 0.2599 | 0.8991 |
| 0.2716 | 3.0 | 792 | 0.2651 | 0.9048 |
| 0.2343 | 4.0 | 1056 | 0.2532 | 0.9197 |
| 0.2165 | 5.0 | 1320 | 0.2636 | 0.9151 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
profoz/mlops-demo | profoz | 2022-04-08T13:56:10Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"classification",
"sequence-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- classification
- sequence-classification
license: apache-2.0
---
Github repository [here](https://github.com/sinanuozdemir/oreilly-transformers-nlp) |
gymball/FatimaFellowship-UpsideDown | gymball | 2022-04-08T12:18:42Z | 0 | 0 | null | [
"Image Classification",
"en",
"dataset:cifar100",
"license:unlicense",
"region:us"
] | null | 2022-04-07T16:18:53Z | ---
language:
- en
tags:
- Image Classification
license: unlicense
datasets:
- cifar100
---
This repo contains a model that is capable of detecting upside-down images.
This is part of my submission for the Fatima Fellowship Selection Task. |
srmukundb/distilbert-base-uncased-finetuned-squad | srmukundb | 2022-04-08T12:08:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-08T00:45:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4104
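A minimal usage sketch with the `question-answering` pipeline (the question and context below are purely illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="srmukundb/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.",
)
# squad_v2 includes unanswerable questions, so inspect the score as well as the answer
print(result["answer"], result["score"])
```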
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2182 | 1.0 | 8235 | 1.2318 |
| 0.9451 | 2.0 | 16470 | 1.2693 |
| 0.7554 | 3.0 | 24705 | 1.4104 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ybelkada/japanese-roberta-question-answering | ybelkada | 2022-04-08T11:38:39Z | 171 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"extractive-qa",
"ja",
"dataset:SkelterLabsInc/JaQuAD",
"license:cc-by-sa-3.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-08T08:52:22Z | ---
license: cc-by-sa-3.0
language: ja
tags:
- question-answering
- extractive-qa
pipeline_tag:
- None
datasets:
- SkelterLabsInc/JaQuAD
metrics:
- Exact match
- F1 score
---
# RoBERTa base Japanese - JaQuAD
## Description
A Japanese Question Answering model fine-tuned on [JaQuAD](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD).
Please refer [RoBERTa base Japanese](https://huggingface.co/rinna/japanese-roberta-base) for details about the pre-training model.
The codes for the fine-tuning are available [on this notebook](https://huggingface.co/ybelkada/japanese-roberta-question-answering/blob/main/roberta_ja_qa.ipynb)
## Usage
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
question = 'アレクサンダー・グラハム・ベルは、どこで生まれたの?'
context = 'アレクサンダー・グラハム・ベルは、スコットランド生まれの科学者、発明家、工学者である。世界初の実用的電話の発明で知られている。'
model = AutoModelForQuestionAnswering.from_pretrained(
'ybelkada/japanese-roberta-question-answering')
tokenizer = AutoTokenizer.from_pretrained(
'ybelkada/japanese-roberta-question-answering')
inputs = tokenizer(
question, context, add_special_tokens=True, return_tensors="pt")
input_ids = inputs["input_ids"].tolist()[0]
outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
# Get the most likely beginning of answer with the argmax of the score.
answer_start = torch.argmax(answer_start_scores)
# Get the most likely end of answer with the argmax of the score.
# 1 is added to `answer_end` because the index pointed by score is inclusive.
answer_end = torch.argmax(answer_end_scores) + 1
answer = tokenizer.convert_tokens_to_string(
tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
# answer = 'スコットランド'
```
## License
The fine-tuned model is licensed under the [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.
## Miscellaneous
The Q&A widget does not work with this model. The error was also reproduced with the `Pipeline` API; this needs further investigation.
|
huggingtweets/emarobot | huggingtweets | 2022-04-08T11:13:49Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-08T11:12:40Z | ---
language: en
thumbnail: http://www.huggingtweets.com/emarobot/1649416424059/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1317183233495388160/nLbBT6WF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">3bkreno</div>
<div style="text-align: center; font-size: 14px;">@emarobot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 3bkreno.
| Data | 3bkreno |
| --- | --- |
| Tweets downloaded | 970 |
| Retweets | 111 |
| Short tweets | 129 |
| Tweets kept | 841 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mfd65acm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @emarobot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1i5j7avt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1i5j7avt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/emarobot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
marcosfp/distilbert-base-uncased-finetuned-objectivity-rotten | marcosfp | 2022-04-08T11:10:02Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-08T10:59:03Z | ---
license: gpl-3.0
---
Objectivity sentence classification model based on **distilbert-base-uncased-finetuned-sst-2-english**. It was fine-tuned with Rotten-IMDB movie review [data](http://www.cs.cornell.edu/people/pabo/movie-review-data/) using extracted sentences from film plots as objective examples and review comments as subjective language examples.
With a 5% test split, we obtained an accuracy of 96% and an F1 score of the same value.
Please feel free to try the demo online with subjective language examples like "I think...", "I believe...", and more objective claims.
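For local use, a minimal sketch is shown below; the example sentences are illustrative, and the label names returned depend on the `id2label` mapping in the model's config.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="marcosfp/distilbert-base-uncased-finetuned-objectivity-rotten",
)

examples = [
    "I think this is the best movie of the decade.",        # subjective
    "The film was released in 2004 and runs 112 minutes.",  # objective
]
for text, prediction in zip(examples, classifier(examples)):
    print(text, "->", prediction)
```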
For any further comments, contact me at [email protected].
|
afbudiman/distilled-indobert-classification | afbudiman | 2022-04-08T09:32:57Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:indonlu",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-08T06:49:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- f1
model-index:
- name: distilled-indobert-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9015873015873016
- name: F1
type: f1
value: 0.9014926755197933
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-indobert-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6015
- Accuracy: 0.9016
- F1: 0.9015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0427 | 1.0 | 688 | 0.6306 | 0.8683 | 0.8684 |
| 0.5332 | 2.0 | 1376 | 0.5621 | 0.8794 | 0.8779 |
| 0.3021 | 3.0 | 2064 | 0.6785 | 0.8905 | 0.8896 |
| 0.1851 | 4.0 | 2752 | 0.6085 | 0.8968 | 0.8959 |
| 0.1152 | 5.0 | 3440 | 0.6015 | 0.9016 | 0.9015 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
btjiong/robbert-twitter-sentiment-custom | btjiong | 2022-04-08T08:17:25Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:dutch_social",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-07T18:07:19Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- dutch_social
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: robbert-twitter-sentiment-custom
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: dutch_social
type: dutch_social
args: dutch_social
metrics:
- name: Accuracy
type: accuracy
value: 0.788
- name: F1
type: f1
value: 0.7878005279207152
- name: Precision
type: precision
value: 0.7877102066609215
- name: Recall
type: recall
value: 0.788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbert-twitter-sentiment-custom
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on the dutch_social dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6656
- Accuracy: 0.788
- F1: 0.7878
- Precision: 0.7877
- Recall: 0.788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8287 | 1.0 | 282 | 0.7178 | 0.7007 | 0.6958 | 0.6973 | 0.7007 |
| 0.4339 | 2.0 | 564 | 0.5873 | 0.7667 | 0.7668 | 0.7681 | 0.7667 |
| 0.2045 | 3.0 | 846 | 0.6656 | 0.788 | 0.7878 | 0.7877 | 0.788 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
philschmid/roberta-large-sst2 | philschmid | 2022-04-08T08:03:59Z | 148 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-08T07:27:49Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-large-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9644495412844036
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-sst2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1400
- Accuracy: 0.9644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3688 | 1.0 | 264 | 0.1444 | 0.9564 |
| 0.1529 | 2.0 | 528 | 0.1502 | 0.9518 |
| 0.107 | 3.0 | 792 | 0.1388 | 0.9530 |
| 0.0666 | 4.0 | 1056 | 0.1400 | 0.9644 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
sail/poolformer_s24 | sail | 2022-04-08T07:48:50Z | 137 | 1 | transformers | [
"transformers",
"pytorch",
"poolformer",
"image-classification",
"vision",
"dataset:imagenet",
"arxiv:2111.11418",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# PoolFormer (S24 model)
PoolFormer model trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu et al. and first released in [this repository](https://github.com/sail-sg/poolformer).
## Model description
PoolFormer is a model that replaces the attention token mixer in Transformers with an extremely simple operator: pooling.
Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulted models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=sail/poolformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import PoolFormerFeatureExtractor, PoolFormerForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = PoolFormerFeatureExtractor.from_pretrained('sail/poolformer_s24')
model = PoolFormerForImageClassification.from_pretrained('sail/poolformer_s24')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The poolformer model was trained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/sail-sg/poolformer/blob/main/train.py#L529-L572).
### Pretraining
The model was trained on TPU-v3s. Training resolution is 224. For all hyperparameters (such as batch size and learning rate), please refer to the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | # params | URL |
|---------------------------------------|-------------------------|----------|------------------------------------------------------------------|
| PoolFormer-S12 | 77.2 | 12M | https://huggingface.co/sail/poolformer_s12 |
| **PoolFormer-S24** | **80.3** | **21M** | **https://huggingface.co/sail/poolformer_s24** |
| PoolFormer-S36 | 81.4 | 31M | https://huggingface.co/sail/poolformer_s36 |
| PoolFormer-M36 | 82.1 | 56M | https://huggingface.co/sail/poolformer_m36 |
| PoolFormer-M48 | 82.5 | 73M | https://huggingface.co/sail/poolformer_m48 |
### BibTeX entry and citation info
```bibtex
@article{yu2021metaformer,
title={MetaFormer is Actually What You Need for Vision},
author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
journal={arXiv preprint arXiv:2111.11418},
year={2021}
}
``` |