modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ACSHCSE/distilbert-base-uncased-finetuned-ner | 9b68dedddb002887c121fe42e8cb513d36d1e1ca | 2022-04-12T08:43:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | ACSHCSE | null | ACSHCSE/distilbert-base-uncased-finetuned-ner | 8 | null | transformers | 13,300 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9230429988974642
- name: Recall
type: recall
value: 0.9365700861393892
- name: F1
type: f1
value: 0.9297573435504469
- name: Accuracy
type: accuracy
value: 0.983176322938345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9230
- Recall: 0.9366
- F1: 0.9298
- Accuracy: 0.9832
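A minimal usage sketch with the `transformers` pipeline (the example sentence and the shape of the output are illustrative, since the card itself does not document them):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ACSHCSE/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entity spans
)

print(ner("Hugging Face is based in New York City."))
# returns a list of dicts with entity_group, score, word, start and end offsets
```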
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2349 | 1.0 | 878 | 0.0736 | 0.9140 | 0.9211 | 0.9175 | 0.9803 |
| 0.0546 | 2.0 | 1756 | 0.0582 | 0.9244 | 0.9368 | 0.9305 | 0.9830 |
| 0.03 | 3.0 | 2634 | 0.0611 | 0.9230 | 0.9366 | 0.9298 | 0.9832 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Raychanan/COVID | 8d423d133617dabacdc420f0018887c556ab0178 | 2022-04-14T23:55:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Raychanan | null | Raychanan/COVID | 8 | null | transformers | 13,301 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [hfl/chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5193
- F1: 0.9546
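The card does not document the label meanings or include a usage example; a minimal sketch, assuming the repository ships its own tokenizer (otherwise it can be loaded from the base hfl/chinese-bert-wwm-ext checkpoint) and using an arbitrary Chinese sentence as a placeholder input:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Raychanan/COVID")
model = AutoModelForSequenceClassification.from_pretrained("Raychanan/COVID")

text = "新冠疫苗接种工作正在有序推进。"  # placeholder input sentence
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # per-class probabilities; the label names are not documented in this card
```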
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3803 | 1.0 | 1792 | 0.5110 | 0.9546 |
| 0.4129 | 2.0 | 3584 | 0.5256 | 0.9546 |
| 0.4804 | 3.0 | 5376 | 0.5305 | 0.9546 |
| 0.6571 | 4.0 | 7168 | 0.5583 | 0.9546 |
| 0.6605 | 5.0 | 8960 | 0.5193 | 0.9546 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
MartinoMensio/racism-models-raw-label-epoch-1 | e82fc05fe5d4c47df0cd07fb013533ef4e1e583a | 2022-05-04T16:02:49.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
]
| text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-raw-label-epoch-1 | 8 | null | transformers | 13,302 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022).
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different ground-truth estimation methods, and for each one we performed 4 epochs of fine-tuning, resulting in 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `raw-label-epoch-1`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'raw-label-epoch-1'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.7924597263336182}, {'label': 'non-racist', 'score': 0.9130864143371582}]
```
For more details, see https://github.com/preyero/neatclass22
|
manu/lilt-camembert-dit-base-hf | 27bfcb250eca01bafca6809625b736fc41758185 | 2022-04-19T15:45:33.000Z | [
"pytorch",
"liltrobertalike",
"fill-mask",
"fr",
"dataset:iit-cdip",
"transformers",
"token-classification",
"license:mit",
"autotrain_compatible"
]
| token-classification | false | manu | null | manu/lilt-camembert-dit-base-hf | 8 | null | transformers | 13,303 | ---
language:
- fr
tags:
- token-classification
- fill-mask
license: mit
datasets:
- iit-cdip
---
This model combines the camembert-base model with the pretrained LiLT checkpoint from the paper "LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding", and a visual backbone built from the pretrained checkpoint "microsoft/dit-base".
*Note:* This model should be fine-tuned, and loaded with the modeling and config files from the branch `improve-dit`.
Original repository: https://github.com/jpWang/LiLT
To use it, it is necessary to fork the modeling and configuration files from the original repository and load the pretrained model with the corresponding classes (LiLTRobertaLikeVisionConfig, LiLTRobertaLikeVisionForRelationExtraction, LiLTRobertaLikeVisionForTokenClassification, LiLTRobertaLikeVisionModel).
They can also be registered with the AutoConfig/AutoModel factories as follows:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, AutoConfig, AutoModel
from path_to_custom_classes import (
LiLTRobertaLikeVisionConfig,
LiLTRobertaLikeVisionForRelationExtraction,
LiLTRobertaLikeVisionForTokenClassification,
LiLTRobertaLikeVisionModel
)
def patch_transformers():
AutoConfig.register("liltrobertalike", LiLTRobertaLikeVisionConfig)
AutoModel.register(LiLTRobertaLikeVisionConfig, LiLTRobertaLikeVisionModel)
AutoModelForTokenClassification.register(LiLTRobertaLikeVisionConfig, LiLTRobertaLikeVisionForTokenClassification)
# etc...
```
To load the model, it is then possible to use:
```python
# patch_transformers() must have been executed beforehand
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModel.from_pretrained("manu/lilt-camembert-dit-base-hf")
model = AutoModelForTokenClassification.from_pretrained("manu/lilt-camembert-dit-base-hf") # to be fine-tuned on a token classification task
``` |
aseifert/comma-mdeberta-v3-base | 512da5ef700db7fdd3f09b90210a41dd8db68999 | 2022-04-16T09:40:45.000Z | [
"pytorch",
"deberta-v2",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | aseifert | null | aseifert/comma-mdeberta-v3-base | 8 | null | transformers | 13,304 | Entry not found |
vumichien/imagegpt-small | adfe0e5be98c684a3aa7dd509c7b0d8496c474de | 2022-04-16T11:53:08.000Z | [
"pytorch",
"imagegpt",
"feature-extraction",
"transformers"
]
| feature-extraction | false | vumichien | null | vumichien/imagegpt-small | 8 | null | transformers | 13,305 | Entry not found |
ttwj-sutd/finetuning-sentiment-model-3000-samples-6pm | 36eedbc4fa0891470ff1d4a6759878ee6b8100b6 | 2022-04-17T10:33:50.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ttwj-sutd | null | ttwj-sutd/finetuning-sentiment-model-3000-samples-6pm | 8 | null | transformers | 13,306 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuning-sentiment-model-3000-samples-6pm
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Precision
type: precision
value: 0.875
- name: Recall
type: recall
value: 0.8866666666666667
- name: F1
type: f1
value: 0.880794701986755
- name: Accuracy
type: accuracy
value: 0.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples-6pm
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2896
- Precision: 0.875
- Recall: 0.8867
- F1: 0.8808
- Accuracy: 0.88
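A minimal inference sketch (the review text is a placeholder, and the labels come back as the generic LABEL_0 / LABEL_1, since the card does not name them):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ttwj-sutd/finetuning-sentiment-model-3000-samples-6pm",
)

print(classifier("This movie was surprisingly good, I would watch it again."))
# e.g. [{'label': 'LABEL_1', 'score': ...}]; the positive/negative mapping is not documented
```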
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 188 | 0.3436 | 0.8633 | 0.8 | 0.8304 | 0.8367 |
| No log | 2.0 | 376 | 0.2896 | 0.875 | 0.8867 | 0.8808 | 0.88 |
| 0.3 | 3.0 | 564 | 0.3330 | 0.8693 | 0.8867 | 0.8779 | 0.8767 |
| 0.3 | 4.0 | 752 | 0.4378 | 0.8766 | 0.9 | 0.8882 | 0.8867 |
| 0.3 | 5.0 | 940 | 0.5198 | 0.8284 | 0.9333 | 0.8777 | 0.87 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
vikasaeta/bert-finetuned-ner | bc8597140e45037c8a020c05d293dc75e1509119 | 2022-04-18T14:15:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | vikasaeta | null | vikasaeta/bert-finetuned-ner | 8 | null | transformers | 13,307 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.931045859452326
- name: Recall
type: recall
value: 0.9498485358465163
- name: F1
type: f1
value: 0.9403532155948018
- name: Accuracy
type: accuracy
value: 0.9857096603284865
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Precision: 0.9310
- Recall: 0.9498
- F1: 0.9404
- Accuracy: 0.9857
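A sketch of token-level inference without the pipeline abstraction, assuming the standard id2label mapping stored in the model config (the example sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("vikasaeta/bert-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("vikasaeta/bert-finetuned-ner")

text = "George Washington lived in Virginia."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# map each word-piece token to its predicted tag
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```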
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0875 | 1.0 | 1756 | 0.0639 | 0.9167 | 0.9387 | 0.9276 | 0.9833 |
| 0.0332 | 2.0 | 3512 | 0.0595 | 0.9334 | 0.9504 | 0.9418 | 0.9857 |
| 0.0218 | 3.0 | 5268 | 0.0614 | 0.9310 | 0.9498 | 0.9404 | 0.9857 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ndavid/binary-question-classifier-bert | 7d1426243ca6fb12128fd46490dd358fdf3d0548 | 2022-04-18T20:01:36.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ndavid | null | ndavid/binary-question-classifier-bert | 8 | null | transformers | 13,308 | Entry not found |
migueladarlo/distilbert-depression-mixed | b3ea372deae1e72a52f081801f95103ddc81c9dd | 2022-04-19T10:35:06.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:CLPsych 2015",
"transformers",
"text",
"Twitter",
"license:mit",
"model-index"
]
| text-classification | false | migueladarlo | null | migueladarlo/distilbert-depression-mixed | 8 | null | transformers | 13,309 | ---
language:
- en
license: mit
tags:
- text
- Twitter
datasets:
- CLPsych 2015
metrics:
- accuracy
- f1
- precision
- recall
- AUC
model-index:
- name: distilbert-depression-mixed
results: []
---
# distilbert-depression-mixed
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) trained on CLPsych 2015 and a scraped dataset, and evaluated on a dataset scraped from Twitter, to detect Twitter users who potentially show signs of depression.
It achieves the following results on the evaluation set:
- Evaluation Loss: 0.71
- Accuracy: 0.63
- F1: 0.59
- Precision: 0.66
- Recall: 0.53
- AUC: 0.63
## Intended uses & limitations
Feed a corpus of tweets to the model to generate a label indicating whether the input is indicative of a depressed user or not. Label 1 means depressed, Label 0 means not depressed.
Limitation: All token sequences longer than 512 are automatically truncated. Also, training and test data may be contaminated with mislabeled users.
### How to use
You can use this model directly with a pipeline for sentiment analysis:
```python
>>> from transformers import DistilBertTokenizerFast, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
>>> from transformers import DistilBertForSequenceClassification
>>> model = DistilBertForSequenceClassification.from_pretrained(r"distilbert-depression-mixed")
>>> from transformers import pipeline
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> tokenizer_kwargs = {'padding':True,'truncation':True,'max_length':512}
>>> result=classifier('pain peko',**tokenizer_kwargs) #For truncation to apply in the pipeline
>>> #Should note that the string passed as the input can be a corpus of tweets concatenated together into one document.
[{'label': 'LABEL_1', 'score': 0.5048992037773132}]
```
Otherwise, download the files and point the pipeline to the folder that contains config.json, pytorch_model.bin, and training_args.bin.
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.19e-05
- train_batch_size: 16
- eval_batch_size: 16
- weight_decay: 0.06
- num_epochs: 5.0
## Training results
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall | AUC |
|:-----:|:-------------:|:---------------:|:--------:|:--------:|:---------:|:--------:|:--------:|
| 1.0 | 0.68 | 0.66 | 0.61 | 0.54 | 0.60 | 0.50 | 0.60 |
| 2.0 | 0.65 | 0.65 | 0.63 | 0.49 | 0.70 | 0.37 | 0.62 |
| 3.0 | 0.53 | 0.63 | 0.66 | 0.58 | 0.69 | 0.50 | 0.65 |
| 4.0 | 0.39 | 0.66 | 0.67 | 0.61 | 0.69 | 0.54 | 0.67 |
| 5.0 | 0.27 | 0.72 | 0.65 | 0.61 | 0.63 | 0.60 | 0.64 | |
skytnt/gpt2-japanese-lyric-xsmall | 7388bc585887e51e0a3299a729984ca196a00333 | 2022-07-06T05:06:01.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"ja",
"transformers",
"japanese",
"lm",
"nlp",
"license:mit"
]
| text-generation | false | skytnt | null | skytnt/gpt2-japanese-lyric-xsmall | 8 | 0 | transformers | 13,310 | ---
language: ja
tags:
- ja
- japanese
- gpt2
- text-generation
- lm
- nlp
license: mit
widget:
- text: "桜が咲く"
---
# Japanese GPT2 Lyric Model
## Model description
The model is used to generate Japanese lyrics.
## How to use
```python
import torch
from transformers import T5Tokenizer, GPT2LMHeadModel
tokenizer = T5Tokenizer.from_pretrained("skytnt/gpt2-japanese-lyric-xsmall")
model = GPT2LMHeadModel.from_pretrained("skytnt/gpt2-japanese-lyric-xsmall")
device = "cuda" if torch.cuda.is_available() else "cpu"  # the prompt tensor below is moved to this device
model = model.to(device)
def gen_lyric(prompt_text: str):
prompt_text = "<s>" + prompt_text.replace("\n", "\\n ")
prompt_tokens = tokenizer.tokenize(prompt_text)
prompt_token_ids = tokenizer.convert_tokens_to_ids(prompt_tokens)
prompt_tensor = torch.LongTensor(prompt_token_ids).to(device)
prompt_tensor = prompt_tensor.view(1, -1)
# model forward
output_sequences = model.generate(
input_ids=prompt_tensor,
max_length=512,
top_p=0.95,
top_k=40,
temperature=1.0,
do_sample=True,
early_stopping=True,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
num_return_sequences=1
)
# convert model outputs to readable sentence
generated_sequence = output_sequences.tolist()[0]
generated_tokens = tokenizer.convert_ids_to_tokens(generated_sequence)
generated_text = tokenizer.convert_tokens_to_string(generated_tokens)
generated_text = "\n".join([s.strip() for s in generated_text.split('\\n')]).replace(' ', '\u3000').replace('<s>', '').replace('</s>', '\n\n---end---')
return generated_text
print(gen_lyric("桜が咲く"))
```
## Training data
The [training data](https://data.anyweb.xyz/dataset/lyric.zip) contains 46,449 Japanese lyrics collected from [NetEase Music](https://music.163.com/) using [lyric_download](https://github.com/SkyTNT/lyric_downlowd).
|
GPL/bioasq-tsdae-msmarco-distilbert-gpl | d36c5a5b55704344c937d6dd9cd2747cf2f47923 | 2022-04-19T16:42:02.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | GPL | null | GPL/bioasq-tsdae-msmarco-distilbert-gpl | 8 | null | sentence-transformers | 13,311 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
nielsr/segformer-trainer-test-bis | a1e2242cfdf7ac4f7ad363ee1c46177e3341e886 | 2022-04-20T07:14:35.000Z | [
"pytorch",
"tensorboard",
"segformer",
"transformers",
"image-segmentation",
"vision",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-segmentation | false | nielsr | null | nielsr/segformer-trainer-test-bis | 8 | null | transformers | 13,312 | ---
license: apache-2.0
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-trainer-test-bis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-trainer-test-bis
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3784
- Mean Iou: 0.1424
- Mean Accuracy: 0.1896
- Overall Accuracy: 0.7288
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.6651
- Accuracy Flat-sidewalk: 0.9129
- Accuracy Flat-crosswalk: 0.0
- Accuracy Flat-cyclinglane: 0.5829
- Accuracy Flat-parkingdriveway: 0.0184
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.0
- Accuracy Human-person: 0.0
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.8322
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.0
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.8930
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.0025
- Accuracy Construction-fenceguardrail: 0.0
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: 0.0
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.0008
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.8552
- Accuracy Nature-terrain: 0.8507
- Accuracy Sky: 0.8336
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.0
- Accuracy Void-static: 0.0
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.4712
- Iou Flat-sidewalk: 0.7651
- Iou Flat-crosswalk: 0.0
- Iou Flat-cyclinglane: 0.5216
- Iou Flat-parkingdriveway: 0.0178
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.0
- Iou Human-person: 0.0
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.5696
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.0
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.4716
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.0024
- Iou Construction-fenceguardrail: 0.0
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: 0.0
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.0008
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.6813
- Iou Nature-terrain: 0.5513
- Iou Sky: 0.7873
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0
- Iou Void-static: 0.0
- Iou Void-unclear: 0.0
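A minimal inference sketch (the image path is a placeholder, and since this checkpoint is a trainer test its predictions may be of limited use):
```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

extractor = SegformerFeatureExtractor.from_pretrained("nielsr/segformer-trainer-test-bis")
model = SegformerForSemanticSegmentation.from_pretrained("nielsr/segformer-trainer-test-bis")

image = Image.open("street_scene.jpg")  # any RGB street-level photo
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# upsample to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_seg = upsampled.argmax(dim=1)[0]  # (H, W) map of class indices
```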
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
surrey-nlp/roberta-large-finetuned-abbr | 5f9cf93a2522c506e9e42c5baf5434cac5fa7992 | 2022-04-30T12:17:08.000Z | [
"pytorch",
"tf",
"roberta",
"token-classification",
"en",
"dataset:surrey-nlp/PLOD-unfiltered",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | surrey-nlp | null | surrey-nlp/roberta-large-finetuned-abbr | 8 | 1 | transformers | 13,313 | ---
model_creators:
- Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan
license: mit
tags:
- generated_from_trainer
datasets:
- surrey-nlp/PLOD-unfiltered
metrics:
- precision
- recall
- f1
- accuracy
language:
- en
widget:
- text: "Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons."
- text: "RAFs are plotted for a selection of neurons in the dorsal zone (DZ) of auditory cortex in Figure 1."
- text: "Images were acquired using a GE 3.0T MRI scanner with an upgrade for echo-planar imaging (EPI)."
model-index:
- name: roberta-large-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: surrey-nlp/PLOD-unfiltered
type: token-classification
args: PLODunfiltered
metrics:
- name: Precision
type: precision
value: 0.9662545190541101
- name: Recall
type: recall
value: 0.9627013733169376
- name: F1
type: f1
value: 0.9644746737300262
- name: Accuracy
type: accuracy
value: 0.9607518572002093
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-ner
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [PLOD-unfiltered](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1393
- Precision: 0.9663
- Recall: 0.9627
- F1: 0.9645
- Accuracy: 0.9608
## Model description
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
## Intended uses & limitations
More information needed
## Training and evaluation data
The model is fine-tuned using the [PLOD-Unfiltered](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) dataset.
This dataset is used for training and evaluating the model. The PLOD dataset was published at LREC 2022 and can help build sequence labeling models for the task of abbreviation detection.
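A minimal usage sketch with one of the widget sentences above (the aggregation strategy and output format are standard pipeline behaviour, not something this card specifies):
```python
from transformers import pipeline

abbr_detector = pipeline(
    "token-classification",
    model="surrey-nlp/roberta-large-finetuned-abbr",
    aggregation_strategy="simple",  # merge word pieces into whole spans
)

text = "Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons."
print(abbr_detector(text))
# expected: spans tagged as abbreviations / long forms with confidence scores
```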
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1281 | 1.0 | 14233 | 0.1300 | 0.9557 | 0.9436 | 0.9496 | 0.9457 |
| 0.1056 | 2.0 | 28466 | 0.1076 | 0.9620 | 0.9552 | 0.9586 | 0.9545 |
| 0.0904 | 3.0 | 42699 | 0.1054 | 0.9655 | 0.9585 | 0.9620 | 0.9583 |
| 0.0743 | 4.0 | 56932 | 0.1145 | 0.9658 | 0.9602 | 0.9630 | 0.9593 |
| 0.0523 | 5.0 | 71165 | 0.1206 | 0.9664 | 0.9619 | 0.9641 | 0.9604 |
| 0.044 | 6.0 | 85398 | 0.1393 | 0.9663 | 0.9627 | 0.9645 | 0.9608 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Intel/camembert-base-mrpc-int8-dynamic | 0912fc264746a2e9554f92541463bd261b307309 | 2022-06-10T02:41:36.000Z | [
"pytorch",
"camembert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingDynamic",
"license:mit",
"model-index"
]
| text-classification | false | Intel | null | Intel/camembert-base-mrpc-int8-dynamic | 8 | null | transformers | 13,314 | ---
language:
- en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingDynamic
datasets:
- glue
metrics:
- f1
model-index:
- name: camembert-base-mrpc-int8-dynamic
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.8842832469775476
---
# INT8 camembert-base-mrpc
### Post-training dynamic quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [camembert-base-mrpc](https://huggingface.co/Intel/camembert-base-mrpc).
The linear module **roberta.encoder.layer.6.attention.self.query** falls back to fp32 to keep the relative accuracy loss within 1%.
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.8843|0.8928|
| **Model size (MB)** |180|422|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
'Intel/camembert-base-mrpc-int8-dynamic',
)
```
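Once loaded, the quantized model can be used like any sequence-pair classifier; a sketch, assuming the repository also ships the tokenizer (otherwise it can be taken from the fp32 Intel/camembert-base-mrpc model) and the usual GLUE MRPC label convention (1 = paraphrase):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Intel/camembert-base-mrpc-int8-dynamic')
inputs = tokenizer(
    "The company said it expects higher sales this year.",
    "The firm forecast an increase in sales for the year.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = int8_model(**inputs).logits
print(logits.argmax(dim=-1))  # 1 is assumed to mean "paraphrase", as in GLUE MRPC
```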
|
okho0653/distilbert-base-uncased-few-shot-sentiment-model | 85ca16f1b5c1e7c27c12bb40b17bfdf95efc45d2 | 2022-04-21T12:28:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | okho0653 | null | okho0653/distilbert-base-uncased-few-shot-sentiment-model | 8 | null | transformers | 13,315 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-few-shot-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-few-shot-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6819
- Accuracy: 0.75
- F1: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
IDEA-CCNL/Yuyuan-Bart-139M | 993df290ca93b2d2dd28368c0c727507e2d2a320 | 2022-04-24T10:03:04.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"arxiv:2204.03905",
"transformers",
"biobart",
"biomedical",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | IDEA-CCNL | null | IDEA-CCNL/Yuyuan-Bart-139M | 8 | 1 | transformers | 13,316 | ---
language:
- en
license: apache-2.0
tags:
- bart
- biobart
- biomedical
inference: true
widget:
- text: "Influenza is a <mask> disease."
- type: "text-generation"
---
# Yuyuan-Bart-139M, one of the [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) models.
The Yuyuan-Bart-139M is a biomedical generative language model jointly produced by Tsinghua University and International Digital Economy Academy.
Paper: [BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model](https://arxiv.org/pdf/2204.03905.pdf)
## Pretraining Corpora
We use PubMed abstracts as the pretraining corpora. The corpora contain about 41 GB of biomedical research paper abstracts on PubMed.
## Pretraining Setup
We continuously pretrain base versions of BART for 120k steps with a batch size of 2560. We use the same vocabulary as BART to tokenize the texts. Although the input length limitation of BART is 1024, the tokenized PubMed abstracts rarely exceed 512. Therefore, for the sake of training efficiency, we truncate all input texts to a maximum length of 512. We mask 30% of the input tokens, and the masked span length is determined by sampling from a Poisson distribution (λ = 3) as used in BART. We use a learning rate schedule with a 0.02 warm-up ratio and linear decay. The learning rate is set to 1e-4. We train the base version of BioBART (139M parameters) on 2 DGX nodes with 16 40GB A100 GPUs for about 100 hours with the help of the open-source framework DeepSpeed.
## Usage
```python
from transformers import BartForConditionalGeneration, BartTokenizer
tokenizer = BartTokenizer.from_pretrained('IDEA-CCNL/Yuyuan-Bart-139M')
model = BartForConditionalGeneration.from_pretrained('IDEA-CCNL/Yuyuan-Bart-139M')
text = 'Influenza is a <mask> disease.'
input_ids = tokenizer([text], return_tensors="pt")['input_ids']
model.eval()
generated_ids = model.generate(
input_ids=input_ids,
)
preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]
print(preds)
```
## Citation
If you find this resource useful, please cite the following in your paper.
```
@misc{BioBART,
title={BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model},
author={Hongyi Yuan and Zheng Yuan and Ruyi Gan and Jiaxing Zhang and Yutao Xie and Sheng Yu},
year={2022},
eprint={2204.03905},
archivePrefix={arXiv}
}
``` |
mldev/bert-finetuned-ner | 3ee99cdb9fb2751b0c42dd2cdff0f4b41ecf671f | 2022-04-24T21:28:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | mldev | null | mldev/bert-finetuned-ner | 8 | 1 | transformers | 13,317 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9343150231634679
- name: Recall
type: recall
value: 0.9503534163581285
- name: F1
type: f1
value: 0.9422659769731353
- name: Accuracy
type: accuracy
value: 0.9865926885265203
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0595
- Precision: 0.9343
- Recall: 0.9504
- F1: 0.9423
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0834 | 1.0 | 1756 | 0.0621 | 0.9148 | 0.9381 | 0.9263 | 0.9833 |
| 0.0321 | 2.0 | 3512 | 0.0615 | 0.9265 | 0.9482 | 0.9372 | 0.9851 |
| 0.0218 | 3.0 | 5268 | 0.0595 | 0.9343 | 0.9504 | 0.9423 | 0.9866 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Alassea/reviews-generator | 25da911a73d78d9db49c586adf6147373f7a6b7a | 2022-04-26T12:59:27.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Alassea | null | Alassea/reviews-generator | 8 | null | transformers | 13,318 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: reviews-generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reviews-generator
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4989
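A minimal generation sketch (the expected input format is not documented in this card, so a short product title is used as an assumed prompt):
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="Alassea/reviews-generator",
)

# the input below is an assumed prompt; adjust it to whatever the model was actually conditioned on
print(generator("Wireless noise-cancelling headphones", max_length=64, do_sample=True))
```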
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7955 | 0.08 | 500 | 3.5578 |
| 3.7486 | 0.16 | 1000 | 3.4989 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
cassiepowell/RoBERTa-large-mnli-for-agreement | e6bb78ecf96eaaa254ca6bfee6682f0d836fa70a | 2022-04-28T15:27:32.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | cassiepowell | null | cassiepowell/RoBERTa-large-mnli-for-agreement | 8 | null | transformers | 13,319 | Entry not found |
rycont/koelectra-bible-classifier | 27235d52de55dea807080ceaba02d7c27f3973ab | 2022-04-28T08:20:04.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | rycont | null | rycont/koelectra-bible-classifier | 8 | null | transformers | 13,320 | Entry not found |
Rerare/distilbert-base-uncased-finetuned-cola | 3cdff2247e0cbc019bb5587616dc8f3c7be296e4 | 2022-04-29T02:19:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Rerare | null | Rerare/distilbert-base-uncased-finetuned-cola | 8 | null | transformers | 13,321 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5291140309961344
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7643
- Matthews Correlation: 0.5291
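A minimal usage sketch for grammatical-acceptability scoring (the labels come back as LABEL_0 / LABEL_1; mapping 1 to "acceptable" follows the usual GLUE CoLA convention and is an assumption here):
```python
from transformers import pipeline

cola = pipeline(
    "text-classification",
    model="Rerare/distilbert-base-uncased-finetuned-cola",
)

print(cola("The book was written by the author."))   # expected: acceptable
print(cola("The book was wrote by author the."))     # expected: unacceptable
```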
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5288 | 1.0 | 535 | 0.5111 | 0.4154 |
| 0.3546 | 2.0 | 1070 | 0.5285 | 0.4887 |
| 0.235 | 3.0 | 1605 | 0.5950 | 0.5153 |
| 0.1722 | 4.0 | 2140 | 0.7643 | 0.5291 |
| 0.1346 | 5.0 | 2675 | 0.8441 | 0.5185 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
it5/it5-efficient-small-el32-informal-to-formal | ee547797b56721d0c94cf569022b06a231949270 | 2022-04-29T15:15:04.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:yahoo/xformal_it",
"arxiv:2203.03759",
"arxiv:2109.10686",
"transformers",
"italian",
"sequence-to-sequence",
"style-transfer",
"efficient",
"formality-style-transfer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | it5 | null | it5/it5-efficient-small-el32-informal-to-formal | 8 | null | transformers | 13,322 | ---
language:
- it
license: apache-2.0
tags:
- italian
- sequence-to-sequence
- style-transfer
- efficient
- formality-style-transfer
datasets:
- yahoo/xformal_it
widget:
- text: "maronn qualcuno mi spieg' CHECCOSA SUCCEDE?!?!"
- text: "wellaaaaaaa, ma fraté sei proprio troppo simpatiko, grazieeee!!"
- text: "nn capisco xke tt i ragazzi lo fanno"
- text: "IT5 è SUPERMEGA BRAVISSIMO a capire tt il vernacolo italiano!!!"
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-informal-to-formal
results:
- task:
type: formality-style-transfer
name: "Informal-to-formal Style Transfer"
dataset:
type: xformal_it
name: "XFORMAL (Italian Subset)"
metrics:
- type: rouge1
value: 0.430
name: "Avg. Test Rouge1"
- type: rouge2
value: 0.221
name: "Avg. Test Rouge2"
- type: rougeL
value: 0.408
name: "Avg. Test RougeL"
- type: bertscore
value: 0.630
name: "Avg. Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
---
# IT5 Cased Small Efficient EL32 for Informal-to-formal Style Transfer 🧐
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
i2f = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-informal-to-formal')
i2f("nn capisco xke tt i ragazzi lo fanno")
>>> [{"generated_text": "non comprendo perché tutti i ragazzi agiscono così"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-informal-to-formal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-informal-to-formal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
doc2query/msmarco-hindi-mt5-base-v1 | 503282cf505231c52866abc85b0d3ef6bc2a2aca | 2022-04-29T11:56:03.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"hi",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | doc2query | null | doc2query/msmarco-hindi-mt5-base-v1 | 8 | null | transformers | 13,323 | ---
language: hi
datasets:
- unicamp-dl/mmarco
widget:
- text: "पाइथन एक सामान्य कार्यों के लिए उपयुक्त, उच्च स्तरीय प्रोग्रामिंग भाषा (General Purpose and High Level Programming language), इन्टरैक्टिव, ऑब्जेक्ट ओरिएन्टेड, स्क्रिप्टिंग भाषा है। इस भाषा को इस तरह से डिजाइन किया गया है ताकि इसमें लिखे गए कोड आसानी से पढ़े और समझे जा सकें।"
license: apache-2.0
---
# doc2query/msmarco-hindi-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs and the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, the model re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-hindi-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "पाइथन एक सामान्य कार्यों के लिए उपयुक्त, उच्च स्तरीय प्रोग्रामिंग भाषा (General Purpose and High Level Programming language), इन्टरैक्टिव, ऑब्जेक्ट ओरिएन्टेड, स्क्रिप्टिंग भाषा है। इस भाषा को इस तरह से डिजाइन किया गया है ताकि इसमें लिखे गए कोड आसानी से पढ़े और समझे जा सकें।"
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
FremyCompany/tmpxcg_kes9 | fcd738f2c20cf381e452a2c77a47dfde66d91af5 | 2022-04-29T16:05:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | FremyCompany | null | FremyCompany/tmpxcg_kes9 | 8 | null | transformers | 13,324 | Entry not found |
shahidul034/drug_sentiment_analysis | 5b062dbc1305a7f370a1b0c1302cc4647b4655c6 | 2022-04-30T11:22:10.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | shahidul034 | null | shahidul034/drug_sentiment_analysis | 8 | null | transformers | 13,325 | Entry not found |
rbesaleli/t5-regex-summarization | cf90e650a19297b2a11ecc9b82393f7648314ab5 | 2022-05-01T22:39:15.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | rbesaleli | null | rbesaleli/t5-regex-summarization | 8 | null | transformers | 13,326 | Entry not found |
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_french_second_train_set_french_False | 1637975f24e1939c854b45ef4edb64405f5bd288 | 2022-06-20T01:54:34.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | ali2066 | null | ali2066/DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_french_second_train_set_french_False | 8 | null | transformers | 13,327 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: _ctxSentence_TRAIN_all_TEST_french_second_train_set_french_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# _ctxSentence_TRAIN_all_TEST_french_second_train_set_french_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4936
- Precision: 0.8189
- Recall: 0.9811
- F1: 0.8927
- Accuracy: 0.8120
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 13 | 0.5150 | 0.7447 | 1.0 | 0.8537 | 0.7447 |
| No log | 2.0 | 26 | 0.5565 | 0.7447 | 1.0 | 0.8537 | 0.7447 |
| No log | 3.0 | 39 | 0.5438 | 0.7778 | 1.0 | 0.8750 | 0.7872 |
| No log | 4.0 | 52 | 0.5495 | 0.7778 | 1.0 | 0.8750 | 0.7872 |
| No log | 5.0 | 65 | 0.5936 | 0.7778 | 1.0 | 0.8750 | 0.7872 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
milyiyo/paraphraser-spanish-t5-small | 50c64bdfef2f25b0633137bcd4746043d06891d3 | 2022-05-02T21:52:21.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | milyiyo | null | milyiyo/paraphraser-spanish-t5-small | 8 | null | transformers | 13,328 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: paraphraser-spanish-t5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paraphraser-spanish-t5-small
This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1307
- eval_runtime: 11.172
- eval_samples_per_second: 162.37
- eval_steps_per_second: 16.291
- epoch: 0.51
- step: 14380
## Model description
More information needed
## Intended uses & limitations
More information needed
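As a minimal, unverified usage sketch (the training input format is not documented here, so the raw Spanish sentence is passed directly; if a task prefix was used during fine-tuning, it would need to be prepended):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "milyiyo/paraphraser-spanish-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("La película fue muy interesante y divertida.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4, num_return_sequences=2)
for o in outputs:
    print(tokenizer.decode(o, skip_special_tokens=True))
```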
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
armanc/affiliations-roberta-base-0.0.1-0.203 | aedf737185e2788f10fefe047230785974f1515a | 2022-05-03T00:44:46.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | armanc | null | armanc/affiliations-roberta-base-0.0.1-0.203 | 8 | null | transformers | 13,329 | Entry not found |
pietrolesci/t5v1_1-base-mnli_snli_anli | 2388064dc3ae02bb36c80d321d8f2acb7231daef | 2022-05-03T14:46:07.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | pietrolesci | null | pietrolesci/t5v1_1-base-mnli_snli_anli | 8 | null | transformers | 13,330 | ## Overview
T5-Base v1.1 model trained to generate hypotheses given a premise and a label. Below is a minimal generation sketch, followed by the settings used to train it.
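As a minimal generation sketch (assuming the training template shown in the configuration below, `'premise: $premise $label hypothesis: '`, and that the label is passed as its string name such as `entailment`; neither is verified here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pietrolesci/t5v1_1-base-mnli_snli_anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Template taken from the `processing` section of the config below
prompt = "premise: A man is playing a guitar on stage. entailment hypothesis: "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sampling settings mirror the `generation` block of the config below
outputs = model.generate(input_ids, do_sample=True, top_k=50, top_p=0.95,
                         max_length=128, num_return_sequences=3)
for o in outputs:
    print(tokenizer.decode(o, skip_special_tokens=True))
```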
```yaml
Experiment configurations
├── datasets
│ └── snli_train:
│ dataset_name: snli
│ dataset_config_name: null
│ cache_dir: null
│ input_fields:
│ - premise
│ - hypothesis
│ target_field: label
│ train_subset_names: null
│ val_subset_names: validation
│ test_subset_names: none
│ train_val_split: null
│ limit_train_samples: null
│ limit_val_samples: null
│ limit_test_samples: null
│ sampling_kwargs:
│ sampling_strategy: random
│ seed: 42
│ replace: false
│ align_labels_with_mapping: null
│ avoid_consistency_check: false
│ predict_label_mapping: null
│ anli_train:
│ dataset_name: anli
│ dataset_config_name: null
│ cache_dir: null
│ input_fields:
│ - premise
│ - hypothesis
│ target_field: label
│ train_subset_names:
│ - train_r1
│ - train_r2
│ - train_r3
│ val_subset_names:
│ - dev_r1
│ - dev_r2
│ - dev_r3
│ test_subset_names: none
│ train_val_split: null
│ limit_train_samples: null
│ limit_val_samples: null
│ limit_test_samples: null
│ sampling_kwargs:
│ sampling_strategy: random
│ seed: 42
│ replace: false
│ align_labels_with_mapping: null
│ avoid_consistency_check: false
│ predict_label_mapping: null
│ mnli_train:
│ dataset_name: multi_nli
│ dataset_config_name: null
│ cache_dir: null
│ input_fields:
│ - premise
│ - hypothesis
│ target_field: label
│ train_subset_names: null
│ val_subset_names: validation_matched
│ test_subset_names: none
│ train_val_split: null
│ limit_train_samples: null
│ limit_val_samples: null
│ limit_test_samples: null
│ sampling_kwargs:
│ sampling_strategy: random
│ seed: 42
│ replace: false
│ align_labels_with_mapping: null
│ avoid_consistency_check: false
│ predict_label_mapping: null
│ snli:
│ dataset_name: snli
│ dataset_config_name: null
│ cache_dir: null
│ input_fields:
│ - premise
│ - hypothesis
│ target_field: label
│ train_subset_names: none
│ val_subset_names: none
│ test_subset_names: null
│ train_val_split: null
│ limit_train_samples: null
│ limit_val_samples: null
│ limit_test_samples: null
│ sampling_kwargs:
│ sampling_strategy: random
│ seed: 42
│ replace: false
│ align_labels_with_mapping: null
│ avoid_consistency_check: false
│ predict_label_mapping: null
│ anli:
│ dataset_name: anli
│ dataset_config_name: null
│ cache_dir: null
│ input_fields:
│ - premise
│ - hypothesis
│ target_field: label
│ train_subset_names: none
│ val_subset_names: none
│ test_subset_names:
│ - test_r1
│ - test_r2
│ - test_r3
│ train_val_split: null
│ limit_train_samples: null
│ limit_val_samples: null
│ limit_test_samples: null
│ sampling_kwargs:
│ sampling_strategy: random
│ seed: 42
│ replace: false
│ align_labels_with_mapping: null
│ avoid_consistency_check: false
│ predict_label_mapping: null
│ mnli:
│ dataset_name: multi_nli
│ dataset_config_name: null
│ cache_dir: null
│ input_fields:
│ - premise
│ - hypothesis
│ target_field: label
│ train_subset_names: none
│ val_subset_names: none
│ test_subset_names: validation_mismatched
│ train_val_split: null
│ limit_train_samples: null
│ limit_val_samples: null
│ limit_test_samples: null
│ sampling_kwargs:
│ sampling_strategy: random
│ seed: 42
│ replace: false
│ align_labels_with_mapping: null
│ avoid_consistency_check: false
│ predict_label_mapping: null
│
├── data
│ └── _target_: src.task.nli.data.NLIGenerationData.from_config
│ main_dataset_name: null
│ use_additional_as_test: null
│ dataloader:
│ batch_size: 96
│ eval_batch_size: 96
│ num_workers: 8
│ pin_memory: true
│ drop_last: false
│ persistent_workers: false
│ shuffle: true
│ seed_dataloader: 42
│ replacement: false
│ processing:
│ preprocessing_num_workers: 8
│ preprocessing_batch_size: 1000
│ load_from_cache_file: true
│ padding: longest
│ truncation: longest_first
│ max_source_length: 128
│ max_target_length: 128
│ template: 'premise: $premise $label hypothesis: '
│ tokenizer:
│ _target_: transformers.AutoTokenizer.from_pretrained
│ pretrained_model_name_or_path: pietrolesci/t5-v1_1-base_nli_gen
│ use_fast: true
│
├── task
│ └── optimizer:
│ name: Adafactor
│ lr: 0.001
│ weight_decay: 0.0
│ no_decay:
│ - bias
│ - LayerNorm.weight
│ decay_rate: -0.8
│ clip_threshold: 1.0
│ relative_step: false
│ scale_parameter: false
│ warmup_init: false
│ scheduler:
│ name: constant_schedule
│ model:
│ model_name_or_path: pietrolesci/t5-v1_1-base_nli_gen
│ checkpoint_path: null
│ freeze: false
│ seed_init_weight: 42
│ _target_: src.task.nli.NLIGenerationTask.from_config
│ generation:
│ generation_max_length: 128
│ generation_min_length: 3
│ do_sample: true
│ early_stopping: false
│ num_beams: 1
│ temperature: 1.0
│ top_k: 50
│ top_p: 0.95
│ repetition_penalty: null
│ length_penalty: null
│ no_repeat_ngram_size: null
│ encoder_no_repeat_ngram_size: null
│ num_return_sequences: 1
│ max_time: null
│ max_new_tokens: null
│ decoder_start_token_id: null
│ use_cache: null
│ num_beam_groups: null
│ diversity_penalty: null
│
├── trainer
│ └── _target_: pytorch_lightning.Trainer
│ callbacks:
│ lr_monitor:
│ _target_: pytorch_lightning.callbacks.LearningRateMonitor
│ logging_interval: step
│ log_momentum: false
│ model_checkpoint:
│ _target_: pytorch_lightning.callbacks.ModelCheckpoint
│ dirpath: ./checkpoints/
│ filename: nli_generator_sma-epoch={epoch:02d}-val_loss={val/aggregat
│ monitor: val/aggregated_loss
│ mode: min
│ verbose: false
│ save_last: true
│ save_top_k: 1
│ auto_insert_metric_name: false
│ save_on_train_epoch_end: false
│ rich_model_summary:
│ _target_: pytorch_lightning.callbacks.RichModelSummary
│ max_depth: 1
│ log_grad_norm:
│ _target_: src.core.callbacks.LogGradNorm
│ norm_type: 2
│ group_separator: /
│ only_total: true
│ on_step: true
│ on_epoch: false
│ prog_bar: true
│ log_generated_text:
│ _target_: src.core.callbacks.GenerateAndLogText
│ dirpath: ./generated_text
│ type: generated_text
│ pop_keys_after_logging: true
│ on_train: false
│ on_validation: false
│ on_test: true
│ log_to_wandb: true
│ wandb_log_dataset_sizes:
│ _target_: src.core.callbacks.WandbLogDatasetSizes
│ logger:
│ wandb:
│ _target_: pytorch_lightning.loggers.WandbLogger
│ project: nli_debiasing
│ entity: team_brushino
│ name: nli_generator_sma
│ save_dir: ./
│ offline: false
│ log_model: false
│ group: generator
│ job_type: genearator_training
│ tags:
│ - nli_generator_sma
│ - seed=42
│ - seed_dataloader=42
│ notes: nli_generator_sma_time=01-37-04
│ enable_checkpointing: true
│ enable_progress_bar: true
│ enable_model_summary: true
│ gradient_clip_val: 6
│ gradient_clip_algorithm: null
│ accelerator: gpu
│ devices: auto
│ gpus: null
│ auto_select_gpus: true
│ accumulate_grad_batches: 1
│ max_epochs: 2
│ min_epochs: 1
│ max_steps: -1
│ min_steps: null
│ max_time: null
│ num_sanity_val_steps: 2
│ overfit_batches: 0.0
│ fast_dev_run: false
│ limit_train_batches: 1.0
│ limit_val_batches: 1.0
│ limit_test_batches: 1.0
│ profiler: null
│ detect_anomaly: false
│ deterministic: false
│ check_val_every_n_epoch: 1
│ val_check_interval: 0.5
│ log_every_n_steps: 1
│ move_metrics_to_cpu: false
│
└── training
└── run_val_before_fit: false
run_val_after_fit: false
run_test_before_fit: false
run_test_after_fit: true
lr: 0.001
seed: 42
show_batch: false
batch_size: 96
eval_batch_size: 96
num_workers: 8
pin_memory: true
drop_last: false
persistent_workers: false
shuffle: true
seed_dataloader: 42
ignore_warnings: true
experiment_name: nli_generator_sma
``` |
ml4pubmed/albert-base-v2_pub_section | 5a7871486dec5a70884412d8d229a1262d18d2c9 | 2022-05-04T00:09:08.000Z | [
"pytorch",
"albert",
"text-classification",
"en",
"dataset:pubmed",
"transformers"
]
| text-classification | false | ml4pubmed | null | ml4pubmed/albert-base-v2_pub_section | 8 | null | transformers | 13,331 | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
example_title: "objective example"
---
# albert-base-v2_pub_section
- original model file name: textclassifer_albert-base-v2_pubmed_full
- This is a fine-tuned checkpoint of `albert-base-v2` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
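A minimal inference sketch (assuming the standard `transformers` text-classification pipeline works for this checkpoint; the example sentence is taken from the widget examples above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ml4pubmed/albert-base-v2_pub_section")
print(classifier("a total of 192 mi patients and 140 control persons were included."))
# expected label: one of BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
```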
## metadata
### training_parameters
- date_run: Apr-26-2022_t-04
- huggingface_tag: albert-base-v2
|
NbAiLab/wav2vec2-large-voxrex-npsc-nst | 8290317356df8c6886e6b7f95aad044befdfd4eb | 2022-06-14T14:17:09.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:cc0-1.0",
"model-index"
]
| automatic-speech-recognition | false | NbAiLab | null | NbAiLab/wav2vec2-large-voxrex-npsc-nst | 8 | null | transformers | 13,332 | ---
license: cc0-1.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-voxrex-npsc-nst
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-voxrex-npsc-nst
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0475
- Wer: 0.0514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 3.3888 | 0.05 | 500 | 3.2558 | 1.0 |
| 2.7683 | 0.11 | 1000 | 2.4163 | 1.0000 |
| 0.6279 | 0.16 | 1500 | 0.3610 | 0.3608 |
| 0.5093 | 0.21 | 2000 | 0.2610 | 0.2776 |
| 0.4024 | 0.26 | 2500 | 0.2219 | 0.2303 |
| 0.3705 | 0.32 | 3000 | 0.1940 | 0.2043 |
| 0.3588 | 0.37 | 3500 | 0.1806 | 0.1822 |
| 0.3312 | 0.42 | 4000 | 0.1611 | 0.1736 |
| 0.3062 | 0.47 | 4500 | 0.1571 | 0.1619 |
| 0.2838 | 0.53 | 5000 | 0.1482 | 0.1552 |
| 0.2896 | 0.58 | 5500 | 0.1406 | 0.1482 |
| 0.2704 | 0.63 | 6000 | 0.1311 | 0.1467 |
| 0.263 | 0.69 | 6500 | 0.1258 | 0.1406 |
| 0.2574 | 0.74 | 7000 | 0.1252 | 0.1343 |
| 0.252 | 0.79 | 7500 | 0.1162 | 0.1279 |
| 0.2355 | 0.84 | 8000 | 0.1161 | 0.1275 |
| 0.2381 | 0.9 | 8500 | 0.1095 | 0.1247 |
| 0.2354 | 0.95 | 9000 | 0.1106 | 0.1250 |
| 0.234 | 1.0 | 9500 | 0.1044 | 0.1186 |
| 0.2094 | 1.05 | 10000 | 0.1052 | 0.1157 |
| 0.2088 | 1.11 | 10500 | 0.1026 | 0.1158 |
| 0.2123 | 1.16 | 11000 | 0.0998 | 0.1120 |
| 0.3087 | 1.21 | 11500 | 0.0971 | 0.1108 |
| 0.1995 | 1.26 | 12000 | 0.0973 | 0.1085 |
| 0.1989 | 1.32 | 12500 | 0.0928 | 0.1063 |
| 0.1993 | 1.37 | 13000 | 0.0920 | 0.1064 |
| 0.1996 | 1.42 | 13500 | 0.0904 | 0.1050 |
| 0.1917 | 1.48 | 14000 | 0.0895 | 0.1051 |
| 0.1857 | 1.53 | 14500 | 0.0889 | 0.1038 |
| 0.1871 | 1.58 | 15000 | 0.0867 | 0.1054 |
| 0.2047 | 1.63 | 15500 | 0.0866 | 0.1017 |
| 0.1845 | 1.69 | 16000 | 0.0865 | 0.1007 |
| 0.178 | 1.74 | 16500 | 0.0835 | 0.0999 |
| 0.1741 | 1.79 | 17000 | 0.0838 | 0.0985 |
| 0.1737 | 1.84 | 17500 | 0.0833 | 0.0966 |
| 0.1713 | 1.9 | 18000 | 0.0799 | 0.0963 |
| 0.1703 | 1.95 | 18500 | 0.0802 | 0.0950 |
| 0.1735 | 2.0 | 19000 | 0.0785 | 0.0926 |
| 0.1619 | 2.06 | 19500 | 0.0785 | 0.0930 |
| 0.1707 | 2.11 | 20000 | 0.0787 | 0.0928 |
| 0.17 | 2.16 | 20500 | 0.0765 | 0.0902 |
| 0.1604 | 2.21 | 21000 | 0.0772 | 0.0918 |
| 0.1576 | 2.27 | 21500 | 0.0745 | 0.0912 |
| 0.1529 | 2.32 | 22000 | 0.0741 | 0.0906 |
| 0.1435 | 2.37 | 22500 | 0.0751 | 0.0888 |
| 0.1526 | 2.42 | 23000 | 0.0734 | 0.0892 |
| 0.1471 | 2.48 | 23500 | 0.0746 | 0.0886 |
| 0.1553 | 2.53 | 24000 | 0.0727 | 0.0872 |
| 0.1641 | 2.58 | 24500 | 0.0720 | 0.0862 |
| 0.1495 | 2.64 | 25000 | 0.0707 | 0.0868 |
| 0.1498 | 2.69 | 25500 | 0.0719 | 0.0864 |
| 0.1438 | 2.74 | 26000 | 0.0703 | 0.0853 |
| 0.1532 | 2.79 | 26500 | 0.0710 | 0.0854 |
| 0.1435 | 2.85 | 27000 | 0.0690 | 0.0847 |
| 0.1486 | 2.9 | 27500 | 0.0683 | 0.0882 |
| 0.1359 | 2.95 | 28000 | 0.0673 | 0.0839 |
| 0.1309 | 3.0 | 28500 | 0.0687 | 0.0843 |
| 0.1312 | 3.06 | 29000 | 0.0696 | 0.0865 |
| 0.1387 | 3.11 | 29500 | 0.0667 | 0.0857 |
| 0.1327 | 3.16 | 30000 | 0.0667 | 0.0845 |
| 0.1251 | 3.21 | 30500 | 0.0662 | 0.0820 |
| 0.1415 | 3.27 | 31000 | 0.0652 | 0.0831 |
| 0.1221 | 3.32 | 31500 | 0.0660 | 0.0822 |
| 0.1337 | 3.37 | 32000 | 0.0658 | 0.0799 |
| 0.1342 | 3.43 | 32500 | 0.0650 | 0.0808 |
| 0.1391 | 3.48 | 33000 | 0.0658 | 0.0791 |
| 0.1351 | 3.53 | 33500 | 0.0654 | 0.0794 |
| 0.1309 | 3.58 | 34000 | 0.0650 | 0.0781 |
| 0.1317 | 3.64 | 34500 | 0.0629 | 0.0783 |
| 0.1326 | 3.69 | 35000 | 0.0637 | 0.0795 |
| 0.1296 | 3.74 | 35500 | 0.0624 | 0.0773 |
| 0.1156 | 3.79 | 36000 | 0.0613 | 0.0759 |
| 0.1242 | 3.85 | 36500 | 0.0627 | 0.0761 |
| 0.1251 | 3.9 | 37000 | 0.0638 | 0.0758 |
| 0.1335 | 3.95 | 37500 | 0.0620 | 0.0756 |
| 0.1374 | 4.01 | 38000 | 0.0628 | 0.0756 |
| 0.1227 | 4.06 | 38500 | 0.0637 | 0.0770 |
| 0.1144 | 4.11 | 39000 | 0.0637 | 0.0775 |
| 0.1222 | 4.16 | 39500 | 0.0630 | 0.0738 |
| 0.1207 | 4.22 | 40000 | 0.0607 | 0.0720 |
| 0.1181 | 4.27 | 40500 | 0.0608 | 0.0724 |
| 0.1259 | 4.32 | 41000 | 0.0608 | 0.0734 |
| 0.1137 | 4.37 | 41500 | 0.0623 | 0.0718 |
| 0.1275 | 4.43 | 42000 | 0.0620 | 0.0721 |
| 0.1218 | 4.48 | 42500 | 0.0599 | 0.0703 |
| 0.1212 | 4.53 | 43000 | 0.0612 | 0.0708 |
| 0.1144 | 4.59 | 43500 | 0.0589 | 0.0702 |
| 0.1199 | 4.64 | 44000 | 0.0589 | 0.0695 |
| 0.1113 | 4.69 | 44500 | 0.0601 | 0.0698 |
| 0.1108 | 4.74 | 45000 | 0.0584 | 0.0695 |
| 0.1196 | 4.8 | 45500 | 0.0596 | 0.0694 |
| 0.1216 | 4.85 | 46000 | 0.0578 | 0.0703 |
| 0.1188 | 4.9 | 46500 | 0.0596 | 0.0684 |
| 0.1122 | 4.95 | 47000 | 0.0584 | 0.0671 |
| 0.1115 | 5.01 | 47500 | 0.0594 | 0.0682 |
| 0.1777 | 5.06 | 48000 | 0.0597 | 0.0682 |
| 0.108 | 5.11 | 48500 | 0.0573 | 0.0691 |
| 0.1132 | 5.16 | 49000 | 0.0583 | 0.0666 |
| 0.1091 | 5.22 | 49500 | 0.0582 | 0.0672 |
| 0.1056 | 5.27 | 50000 | 0.0578 | 0.0674 |
| 0.1027 | 5.32 | 50500 | 0.0574 | 0.0671 |
| 0.1112 | 5.38 | 51000 | 0.0569 | 0.0659 |
| 0.1096 | 5.43 | 51500 | 0.0582 | 0.0662 |
| 0.1098 | 5.48 | 52000 | 0.0576 | 0.0667 |
| 0.1088 | 5.53 | 52500 | 0.0560 | 0.0679 |
| 0.1076 | 5.59 | 53000 | 0.0579 | 0.0664 |
| 0.1037 | 5.64 | 53500 | 0.0556 | 0.0661 |
| 0.1039 | 5.69 | 54000 | 0.0572 | 0.0675 |
| 0.108 | 5.74 | 54500 | 0.0562 | 0.0662 |
| 0.1069 | 5.8 | 55000 | 0.0576 | 0.0663 |
| 0.1066 | 5.85 | 55500 | 0.0564 | 0.0651 |
| 0.0939 | 5.9 | 56000 | 0.0566 | 0.0644 |
| 0.1118 | 5.96 | 56500 | 0.0570 | 0.0650 |
| 0.1111 | 6.01 | 57000 | 0.0563 | 0.0668 |
| 0.1014 | 6.06 | 57500 | 0.0557 | 0.0660 |
| 0.0971 | 6.11 | 58000 | 0.0567 | 0.0667 |
| 0.0932 | 6.17 | 58500 | 0.0559 | 0.0664 |
| 0.1002 | 6.22 | 59000 | 0.0551 | 0.0640 |
| 0.1028 | 6.27 | 59500 | 0.0560 | 0.0629 |
| 0.0992 | 6.32 | 60000 | 0.0547 | 0.0641 |
| 0.0975 | 6.38 | 60500 | 0.0556 | 0.0630 |
| 0.0957 | 6.43 | 61000 | 0.0555 | 0.0632 |
| 0.0931 | 6.48 | 61500 | 0.0546 | 0.0641 |
| 0.0999 | 6.54 | 62000 | 0.0556 | 0.0633 |
| 0.0998 | 6.59 | 62500 | 0.0539 | 0.0628 |
| 0.0991 | 6.64 | 63000 | 0.0559 | 0.0630 |
| 0.1027 | 6.69 | 63500 | 0.0549 | 0.0628 |
| 0.097 | 6.75 | 64000 | 0.0547 | 0.0628 |
| 0.0933 | 6.8 | 64500 | 0.0544 | 0.0633 |
| 0.0919 | 6.85 | 65000 | 0.0535 | 0.0640 |
| 0.0973 | 6.9 | 65500 | 0.0543 | 0.0619 |
| 0.0979 | 6.96 | 66000 | 0.0525 | 0.0620 |
| 0.1076 | 7.01 | 66500 | 0.0529 | 0.0615 |
| 0.0888 | 7.06 | 67000 | 0.0546 | 0.0617 |
| 0.0926 | 7.11 | 67500 | 0.0530 | 0.0636 |
| 0.0902 | 7.17 | 68000 | 0.0540 | 0.0631 |
| 0.1004 | 7.22 | 68500 | 0.0529 | 0.0624 |
| 0.0963 | 7.27 | 69000 | 0.0534 | 0.0631 |
| 0.0946 | 7.33 | 69500 | 0.0534 | 0.0601 |
| 0.0897 | 7.38 | 70000 | 0.0525 | 0.0607 |
| 0.0925 | 7.43 | 70500 | 0.0535 | 0.0599 |
| 0.0883 | 7.48 | 71000 | 0.0518 | 0.0605 |
| 0.0942 | 7.54 | 71500 | 0.0522 | 0.0587 |
| 0.0863 | 7.59 | 72000 | 0.0533 | 0.0593 |
| 0.0894 | 7.64 | 72500 | 0.0529 | 0.0587 |
| 0.0908 | 7.69 | 73000 | 0.0519 | 0.0596 |
| 0.0878 | 7.75 | 73500 | 0.0521 | 0.0585 |
| 0.0949 | 7.8 | 74000 | 0.0524 | 0.0588 |
| 0.0962 | 7.85 | 74500 | 0.0521 | 0.0581 |
| 0.0918 | 7.91 | 75000 | 0.0513 | 0.0579 |
| 0.0933 | 7.96 | 75500 | 0.0522 | 0.0582 |
| 0.0839 | 8.01 | 76000 | 0.0536 | 0.0579 |
| 0.0868 | 8.06 | 76500 | 0.0526 | 0.0577 |
| 0.086 | 8.12 | 77000 | 0.0525 | 0.0590 |
| 0.0801 | 8.17 | 77500 | 0.0533 | 0.0586 |
| 0.0845 | 8.22 | 78000 | 0.0516 | 0.0578 |
| 0.0895 | 8.27 | 78500 | 0.0530 | 0.0583 |
| 0.0841 | 8.33 | 79000 | 0.0515 | 0.0584 |
| 0.0921 | 8.38 | 79500 | 0.0518 | 0.0573 |
| 0.0897 | 8.43 | 80000 | 0.0514 | 0.0583 |
| 0.0889 | 8.49 | 80500 | 0.0508 | 0.0582 |
| 0.1783 | 8.54 | 81000 | 0.0507 | 0.0574 |
| 0.0854 | 8.59 | 81500 | 0.0505 | 0.0580 |
| 0.0855 | 8.64 | 82000 | 0.0513 | 0.0577 |
| 0.0843 | 8.7 | 82500 | 0.0508 | 0.0580 |
| 0.0858 | 8.75 | 83000 | 0.0501 | 0.0578 |
| 0.0814 | 8.8 | 83500 | 0.0509 | 0.0580 |
| 0.0823 | 8.85 | 84000 | 0.0509 | 0.0575 |
| 0.0857 | 8.91 | 84500 | 0.0499 | 0.0599 |
| 0.0787 | 8.96 | 85000 | 0.0505 | 0.0598 |
| 0.0805 | 9.01 | 85500 | 0.0510 | 0.0606 |
| 0.0798 | 9.07 | 86000 | 0.0515 | 0.0603 |
| 0.0812 | 9.12 | 86500 | 0.0507 | 0.0586 |
| 0.0781 | 9.17 | 87000 | 0.0511 | 0.0612 |
| 0.0814 | 9.22 | 87500 | 0.0508 | 0.0589 |
| 0.0821 | 9.28 | 88000 | 0.0507 | 0.0588 |
| 0.0808 | 9.33 | 88500 | 0.0498 | 0.0571 |
| 0.0793 | 9.38 | 89000 | 0.0502 | 0.0574 |
| 0.0791 | 9.43 | 89500 | 0.0498 | 0.0568 |
| 0.0779 | 9.49 | 90000 | 0.0507 | 0.0570 |
| 0.0777 | 9.54 | 90500 | 0.0508 | 0.0573 |
| 0.0816 | 9.59 | 91000 | 0.0493 | 0.0573 |
| 0.0835 | 9.64 | 91500 | 0.0496 | 0.0563 |
| 0.0827 | 9.7 | 92000 | 0.0493 | 0.0559 |
| 0.0904 | 9.75 | 92500 | 0.0492 | 0.0564 |
| 0.0753 | 9.8 | 93000 | 0.0503 | 0.0557 |
| 0.0748 | 9.86 | 93500 | 0.0493 | 0.0554 |
| 0.0759 | 9.91 | 94000 | 0.0499 | 0.0557 |
| 0.0825 | 9.96 | 94500 | 0.0498 | 0.0566 |
| 0.0787 | 10.01 | 95000 | 0.0499 | 0.0561 |
| 0.0804 | 10.07 | 95500 | 0.0499 | 0.0562 |
| 0.0784 | 10.12 | 96000 | 0.0500 | 0.0555 |
| 0.0747 | 10.17 | 96500 | 0.0497 | 0.0548 |
| 0.0748 | 10.22 | 97000 | 0.0492 | 0.0565 |
| 0.0732 | 10.28 | 97500 | 0.0493 | 0.0547 |
| 0.0766 | 10.33 | 98000 | 0.0490 | 0.0552 |
| 0.0762 | 10.38 | 98500 | 0.0504 | 0.0551 |
| 0.0744 | 10.44 | 99000 | 0.0496 | 0.0553 |
| 0.0702 | 10.49 | 99500 | 0.0496 | 0.0548 |
| 0.0802 | 10.54 | 100000 | 0.0499 | 0.0545 |
| 0.1605 | 10.59 | 100500 | 0.0477 | 0.0543 |
| 0.0768 | 10.65 | 101000 | 0.0487 | 0.0552 |
| 0.0833 | 10.7 | 101500 | 0.0495 | 0.0550 |
| 0.0782 | 10.75 | 102000 | 0.0479 | 0.0553 |
| 0.0813 | 10.8 | 102500 | 0.0490 | 0.0542 |
| 0.0712 | 10.86 | 103000 | 0.0485 | 0.0541 |
| 0.0703 | 10.91 | 103500 | 0.0486 | 0.0544 |
| 0.0765 | 10.96 | 104000 | 0.0480 | 0.0538 |
| 0.0796 | 11.02 | 104500 | 0.0486 | 0.0535 |
| 0.0778 | 11.07 | 105000 | 0.0492 | 0.0535 |
| 0.0735 | 11.12 | 105500 | 0.0494 | 0.0533 |
| 0.068 | 11.17 | 106000 | 0.0485 | 0.0528 |
| 0.0687 | 11.23 | 106500 | 0.0498 | 0.0534 |
| 0.0641 | 11.28 | 107000 | 0.0493 | 0.0534 |
| 0.0712 | 11.33 | 107500 | 0.0485 | 0.0526 |
| 0.0827 | 11.38 | 108000 | 0.0484 | 0.0530 |
| 0.0715 | 11.44 | 108500 | 0.0480 | 0.0533 |
| 0.0733 | 11.49 | 109000 | 0.0482 | 0.0532 |
| 0.0754 | 11.54 | 109500 | 0.0481 | 0.0537 |
| 0.0719 | 11.59 | 110000 | 0.0475 | 0.0533 |
| 0.0707 | 11.65 | 110500 | 0.0479 | 0.0536 |
| 0.0687 | 11.7 | 111000 | 0.0483 | 0.0535 |
| 0.0713 | 11.75 | 111500 | 0.0485 | 0.0535 |
| 0.0674 | 11.81 | 112000 | 0.0482 | 0.0537 |
| 0.0704 | 11.86 | 112500 | 0.0487 | 0.0537 |
| 0.0691 | 11.91 | 113000 | 0.0484 | 0.0541 |
| 0.0708 | 11.96 | 113500 | 0.0485 | 0.0548 |
| 0.0683 | 12.02 | 114000 | 0.0487 | 0.0541 |
| 0.0691 | 12.07 | 114500 | 0.0492 | 0.0540 |
| 0.0679 | 12.12 | 115000 | 0.0486 | 0.0540 |
| 0.073 | 12.17 | 115500 | 0.0479 | 0.0545 |
| 0.0647 | 12.23 | 116000 | 0.0484 | 0.0534 |
| 0.0663 | 12.28 | 116500 | 0.0484 | 0.0532 |
| 0.0687 | 12.33 | 117000 | 0.0483 | 0.0532 |
| 0.0696 | 12.39 | 117500 | 0.0482 | 0.0541 |
| 0.068 | 12.44 | 118000 | 0.0487 | 0.0531 |
| 0.0681 | 12.49 | 118500 | 0.0483 | 0.0530 |
| 0.0774 | 12.54 | 119000 | 0.0481 | 0.0533 |
| 0.0656 | 12.6 | 119500 | 0.0484 | 0.0529 |
| 0.0628 | 12.65 | 120000 | 0.0479 | 0.0533 |
| 0.0657 | 12.7 | 120500 | 0.0490 | 0.0538 |
| 0.0668 | 12.75 | 121000 | 0.0485 | 0.0533 |
| 0.0656 | 12.81 | 121500 | 0.0484 | 0.0531 |
| 0.0745 | 12.86 | 122000 | 0.0474 | 0.0526 |
| 0.0654 | 12.91 | 122500 | 0.0485 | 0.0528 |
| 0.0764 | 12.97 | 123000 | 0.0482 | 0.0529 |
| 0.0673 | 13.02 | 123500 | 0.0491 | 0.0526 |
| 0.0649 | 13.07 | 124000 | 0.0489 | 0.0527 |
| 0.0655 | 13.12 | 124500 | 0.0485 | 0.0520 |
| 0.0688 | 13.18 | 125000 | 0.0476 | 0.0524 |
| 0.0683 | 13.23 | 125500 | 0.0475 | 0.0523 |
| 0.0632 | 13.28 | 126000 | 0.0480 | 0.0528 |
| 0.063 | 13.33 | 126500 | 0.0483 | 0.0528 |
| 0.1418 | 13.39 | 127000 | 0.0464 | 0.0531 |
| 0.0693 | 13.44 | 127500 | 0.0473 | 0.0525 |
| 0.0696 | 13.49 | 128000 | 0.0477 | 0.0519 |
| 0.0644 | 13.54 | 128500 | 0.0477 | 0.0520 |
| 0.0625 | 13.6 | 129000 | 0.0480 | 0.0518 |
| 0.0682 | 13.65 | 129500 | 0.0471 | 0.0517 |
| 0.0698 | 13.7 | 130000 | 0.0480 | 0.0521 |
| 0.0643 | 13.76 | 130500 | 0.0482 | 0.0522 |
| 0.065 | 13.81 | 131000 | 0.0478 | 0.0521 |
| 0.0648 | 13.86 | 131500 | 0.0482 | 0.0519 |
| 0.0689 | 13.91 | 132000 | 0.0476 | 0.0520 |
| 0.0721 | 13.97 | 132500 | 0.0473 | 0.0523 |
| 0.0652 | 14.02 | 133000 | 0.0474 | 0.0519 |
| 0.0651 | 14.07 | 133500 | 0.0479 | 0.0519 |
| 0.0638 | 14.12 | 134000 | 0.0478 | 0.0520 |
| 0.0626 | 14.18 | 134500 | 0.0482 | 0.0519 |
| 0.0656 | 14.23 | 135000 | 0.0479 | 0.0521 |
| 0.0633 | 14.28 | 135500 | 0.0478 | 0.0519 |
| 0.0665 | 14.34 | 136000 | 0.0480 | 0.0519 |
| 0.0638 | 14.39 | 136500 | 0.0478 | 0.0517 |
| 0.0691 | 14.44 | 137000 | 0.0474 | 0.0515 |
| 0.0642 | 14.49 | 137500 | 0.0476 | 0.0514 |
| 0.0696 | 14.55 | 138000 | 0.0475 | 0.0515 |
| 0.0601 | 14.6 | 138500 | 0.0478 | 0.0515 |
| 0.0616 | 14.65 | 139000 | 0.0476 | 0.0515 |
| 0.0648 | 14.7 | 139500 | 0.0477 | 0.0516 |
| 0.0682 | 14.76 | 140000 | 0.0477 | 0.0515 |
| 0.0641 | 14.81 | 140500 | 0.0474 | 0.0515 |
| 0.0579 | 14.86 | 141000 | 0.0475 | 0.0514 |
| 0.0613 | 14.92 | 141500 | 0.0475 | 0.0514 |
| 0.0624 | 14.97 | 142000 | 0.0475 | 0.0514 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
philschmid/sagemaker-distilbert-emotion | 4d97496cfe9402866b5ac0339fbdfdb8050b67cd | 2022-06-23T15:13:15.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | philschmid | null | philschmid/sagemaker-distilbert-emotion | 8 | null | transformers | 13,333 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9185
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9185
verified: true
- name: Precision Macro
type: precision
value: 0.8869690559183302
verified: true
- name: Precision Micro
type: precision
value: 0.9185
verified: true
- name: Precision Weighted
type: precision
value: 0.9177420617963024
verified: true
- name: Recall Macro
type: recall
value: 0.8696773617395324
verified: true
- name: Recall Micro
type: recall
value: 0.9185
verified: true
- name: Recall Weighted
type: recall
value: 0.9185
verified: true
- name: F1 Macro
type: f1
value: 0.8772854847626651
verified: true
- name: F1 Micro
type: f1
value: 0.9185
verified: true
- name: F1 Weighted
type: f1
value: 0.9175578471721796
verified: true
- name: loss
type: loss
value: 0.24682247638702393
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2468
- Accuracy: 0.9185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9175 | 1.0 | 500 | 0.2468 | 0.9185 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
domischwimmbeck/bert-base-german-cased-fine-tuned-ner | 754708b8dbd7ce046136cdd7e25796d16778fd93 | 2022-05-05T08:07:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:germa_ner",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | domischwimmbeck | null | domischwimmbeck/bert-base-german-cased-fine-tuned-ner | 8 | null | transformers | 13,334 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- germa_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-fine-tuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: germa_ner
type: germa_ner
args: default
metrics:
- name: Precision
type: precision
value: 0.8089260808926081
- name: Recall
type: recall
value: 0.872836719337848
- name: F1
type: f1
value: 0.8396670285921101
- name: Accuracy
type: accuracy
value: 0.9748511630761677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-fine-tuned-ner
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the germa_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0966
- Precision: 0.8089
- Recall: 0.8728
- F1: 0.8397
- Accuracy: 0.9749
## Model description
More information needed
## Intended uses & limitations
More information needed
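A minimal inference sketch (assuming the standard token-classification pipeline; the example sentence is made up for illustration):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="domischwimmbeck/bert-base-german-cased-fine-tuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Angela Merkel besuchte im Sommer Berlin."))
```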
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.159 | 1.0 | 737 | 0.0922 | 0.7472 | 0.8461 | 0.7936 | 0.9703 |
| 0.0714 | 2.0 | 1474 | 0.0916 | 0.7886 | 0.8713 | 0.8279 | 0.9731 |
| 0.0319 | 3.0 | 2211 | 0.0966 | 0.8089 | 0.8728 | 0.8397 | 0.9749 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Nakul24/RoBERTa-emotion-classification | 54a7229a3e28f8894998a8d8396acc56237d382c | 2022-05-04T20:14:35.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Nakul24 | null | Nakul24/RoBERTa-emotion-classification | 8 | null | transformers | 13,335 | Entry not found |
nid989/fewshot-learning-bart-base-paraphrase-finetuned-for-chunking | dcc2e9aac0623a4d0d03d399060b1f2a7f539fce | 2022-05-05T04:33:31.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | nid989 | null | nid989/fewshot-learning-bart-base-paraphrase-finetuned-for-chunking | 8 | null | transformers | 13,336 | ---
license: apache-2.0
---
|
Colorful/BureBERT | 1ffdcf63d8e03d1dd46deea076a69e5c71a08e6f | 2022-05-05T07:54:11.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"transformers",
"license:mit"
]
| text-classification | false | Colorful | null | Colorful/BureBERT | 8 | null | transformers | 13,337 | ---
license: mit
---
BureBERT is a pre-trained language model for bug reports. It can be fine-tuned on all kinds of bug report related tasks such as bug report summarization, duplicate bug report detection, bug priority prediction, etc. |
rdchambers/bert-finetuned-filler-2 | 206a9ff65aa9aaa3873eece7436dae52a01a6466 | 2022-05-05T20:55:51.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | rdchambers | null | rdchambers/bert-finetuned-filler-2 | 8 | null | transformers | 13,338 | Entry not found |
anuragshas/wav2vec2-xls-r-300m-bn-cv9-with-lm | f1bf37948da11ff3373fa3e90284b550554363b7 | 2022-05-10T16:17:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"dataset:mozilla-foundation/common_voice_9_0",
"transformers",
"mozilla-foundation/common_voice_9_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-300m-bn-cv9-with-lm | 8 | null | transformers | 13,339 | ---
language:
- bn
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_9_0
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: XLS-R-300M - Bengali
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_9_0
name: Common Voice 9
args: bn
metrics:
- type: wer
value: 20.150
name: Test WER
- name: Test CER
type: cer
value: 4.813
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - BN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2297
- Wer: 0.2850
- Cer: 0.0660
## Model description
More information needed
## Intended uses & limitations
More information needed
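A minimal inference sketch (assuming the checkpoint ships a processor with the language model implied by its name, and that the input is a 16 kHz mono audio file; the file path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="anuragshas/wav2vec2-xls-r-300m-bn-cv9-with-lm")
print(asr("sample_bengali_16khz.wav"))  # hypothetical audio file
```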
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 8692
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.675 | 2.3 | 400 | 3.5052 | 1.0 | 1.0 |
| 3.0446 | 4.6 | 800 | 2.2759 | 1.0052 | 0.5215 |
| 1.7276 | 6.9 | 1200 | 0.7083 | 0.6697 | 0.1969 |
| 1.5171 | 9.2 | 1600 | 0.5328 | 0.5733 | 0.1568 |
| 1.4176 | 11.49 | 2000 | 0.4571 | 0.5161 | 0.1381 |
| 1.343 | 13.79 | 2400 | 0.3910 | 0.4522 | 0.1160 |
| 1.2743 | 16.09 | 2800 | 0.3534 | 0.4137 | 0.1044 |
| 1.2396 | 18.39 | 3200 | 0.3278 | 0.3877 | 0.0959 |
| 1.2035 | 20.69 | 3600 | 0.3109 | 0.3741 | 0.0917 |
| 1.1745 | 22.99 | 4000 | 0.2972 | 0.3618 | 0.0882 |
| 1.1541 | 25.29 | 4400 | 0.2836 | 0.3427 | 0.0832 |
| 1.1372 | 27.59 | 4800 | 0.2759 | 0.3357 | 0.0812 |
| 1.1048 | 29.89 | 5200 | 0.2669 | 0.3284 | 0.0783 |
| 1.0966 | 32.18 | 5600 | 0.2678 | 0.3249 | 0.0775 |
| 1.0747 | 34.48 | 6000 | 0.2547 | 0.3134 | 0.0748 |
| 1.0593 | 36.78 | 6400 | 0.2491 | 0.3077 | 0.0728 |
| 1.0417 | 39.08 | 6800 | 0.2450 | 0.3012 | 0.0711 |
| 1.024 | 41.38 | 7200 | 0.2402 | 0.2956 | 0.0694 |
| 1.0106 | 43.68 | 7600 | 0.2351 | 0.2915 | 0.0681 |
| 1.0014 | 45.98 | 8000 | 0.2328 | 0.2896 | 0.0673 |
| 0.9999 | 48.28 | 8400 | 0.2318 | 0.2866 | 0.0667 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
|
okho0653/distilbert-base-uncased-finetuned-sst-2-english-zero-shot-sentiment-model | 645ff7be05d39d708e8173b5988e5ae6b0d2ba72 | 2022-05-06T05:20:46.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | okho0653 | null | okho0653/distilbert-base-uncased-finetuned-sst-2-english-zero-shot-sentiment-model | 8 | null | transformers | 13,340 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english-zero-shot-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst-2-english-zero-shot-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
avuhong/ESM1b_AAV2_classification | 9fb74dccf1c873db79fcec73b19deaf9e84f65f3 | 2022-05-08T13:48:05.000Z | [
"pytorch",
"tensorboard",
"esm",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | avuhong | null | avuhong/ESM1b_AAV2_classification | 8 | null | transformers | 13,341 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: ESM1b_AAV2_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ESM1b_AAV2_classification
To load the tokenizer from ESM, you need to install this version of transformers as follows:
!git clone -b add_esm-proper --single-branch https://github.com/liujas000/transformers.git
!pip -q install ./transformers
This model is a fine-tuned version of [facebook/esm-1b](https://huggingface.co/facebook/esm-1b) on the AAV2 dataset with ~230k sequences (Bryant et al., 2020).
The WT sequence (aa561-588): D E E E I R T T N P V A T E Q Y G S V S T N L Q R G N R
Maximum length: 50
It achieves the following results on the evaluation set.
Note: these are the results from the last epoch. The pushed model is presumably loaded from the best checkpoint (lowest validation loss), though this has not been verified.
- Loss: 0.2250
- Accuracy: 0.9620
- F1: 0.9632
- Precision: 0.9642
- Recall: 0.9622
- Auroc: 0.9620
## Model description
More information needed
## Intended uses & limitations
More information needed
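A minimal inference sketch (assuming the custom ESM branch above is installed, that sequences are passed space-separated as shown, and that the classification head loads via `AutoModelForSequenceClassification`; none of this is verified here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "avuhong/ESM1b_AAV2_classification"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

seq = "D E E E I R T T N P V A T E Q Y G S V S T N L Q R G N R"  # WT aa561-588 from above
inputs = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```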
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Auroc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| No log | 1.0 | 232 | 0.1311 | 0.9495 | 0.9501 | 0.9711 | 0.9299 | 0.9502 |
| No log | 2.0 | 464 | 0.1032 | 0.9606 | 0.9620 | 0.9583 | 0.9657 | 0.9604 |
| 0.1924 | 3.0 | 696 | 0.0995 | 0.9627 | 0.9641 | 0.9584 | 0.9700 | 0.9625 |
| 0.1924 | 4.0 | 928 | 0.1218 | 0.9611 | 0.9624 | 0.9607 | 0.9641 | 0.9610 |
| 0.067 | 5.0 | 1160 | 0.1187 | 0.9622 | 0.9633 | 0.9678 | 0.9588 | 0.9623 |
| 0.067 | 6.0 | 1392 | 0.1514 | 0.9612 | 0.9621 | 0.9710 | 0.9534 | 0.9615 |
| 0.0271 | 7.0 | 1624 | 0.1890 | 0.9612 | 0.9626 | 0.9580 | 0.9673 | 0.9610 |
| 0.0271 | 8.0 | 1856 | 0.2250 | 0.9620 | 0.9632 | 0.9642 | 0.9622 | 0.9620 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.10.3
|
theojolliffe/distilbart-cnn-12-6-finetuned-arxiv | d5385f8c4c7c89caa97dd05aa59f2d6c987f8834 | 2022-05-07T17:23:21.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:scientific_papers",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | theojolliffe | null | theojolliffe/distilbart-cnn-12-6-finetuned-arxiv | 8 | null | transformers | 13,342 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-arxiv
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scientific_papers
type: scientific_papers
args: arxiv
metrics:
- name: Rouge1
type: rouge
value: 40.0881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-arxiv
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5467
- Rouge1: 40.0881
- Rouge2: 14.5466
- Rougel: 23.3775
- Rougelsum: 35.8672
- Gen Len: 122.4665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.6567 | 1.0 | 12690 | 2.5467 | 40.0881 | 14.5466 | 23.3775 | 35.8672 | 122.4665 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nirajsaran/AdTextGeneration | 5bbb6d88a72f875dba06be220ac38ab44a753ba8 | 2022-05-10T19:00:48.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"license:mit"
]
| text-generation | false | nirajsaran | null | nirajsaran/AdTextGeneration | 8 | null | transformers | 13,343 | ---
license: mit
inference:
parameters:
temperature: 0.7
use_cache: false
max_length: 200
top_k: 5
top_p: 0.9
widget:
- text: "Sony TV"
example_title: "Amazon Ad text Electronics"
- text: "Apple Watch"
example_title: "Amazon Ad text Wearables"
- text: "Last minute shopping for Samsung headphones for"
example_title: "Ads for shopping deals"
- text: "Labor Day discounts for"
example_title: "Ads for Holiday deals"
metrics:
- bleu
- sacrebleu
---
Generates ad copy, currently for Amazon shopping ads (fine-tuned for electronics and wearables).
**Usage Examples:**
Enter the bolded text below to get the Amazon ad generated by the model.
**Big savings on the new** Roku Streaming Device
**Mothers Day discounts for** Apple Watch Wireless Charger USB Charging Cable
**Big savings on the new Sony**
**Last minute shopping for Samsung headphones for**
You can try entering brand and product names like Samsung Galaxy to see the ad text generator in action.
Currently fine-tuned from the EleutherAI/gpt-neo-125M model.
**Model Performance:**
The model does quite well on the Electronics and Wearables categories on which it has been fine-tuned. There are, however, occasional hallucinations, though the ad copy is mostly coherent.
In other domains it doesn't do quite as well, for example: "Tesla for Christmas today", "Honda on sale".
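A minimal generation sketch (assuming the standard text-generation pipeline; the sampling parameters mirror the inference settings declared above):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="nirajsaran/AdTextGeneration")
ad = generator("Big savings on the new Sony",
               max_length=200, do_sample=True,
               temperature=0.7, top_k=5, top_p=0.9)
print(ad[0]["generated_text"])
```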
|
Jeevesh8/bert_ft_qqp-1 | eaf6969ec3e0f838e7a713e9e50afd8787cf92f9 | 2022-05-09T09:32:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-1 | 8 | null | transformers | 13,344 | Entry not found |
Jeevesh8/bert_ft_qqp-2 | d37657184c4bf7ce5bd737dcd44050d87076caa2 | 2022-05-09T09:35:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-2 | 8 | null | transformers | 13,345 | Entry not found |
Jeevesh8/bert_ft_qqp-3 | 3287858de162f75accb49405f90ec67c7f2bab78 | 2022-05-09T09:37:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-3 | 8 | null | transformers | 13,346 | Entry not found |
Jeevesh8/bert_ft_qqp-4 | bdf13db16c4bbf09af182d8f1ca33abc4cb89c13 | 2022-05-09T09:40:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-4 | 8 | null | transformers | 13,347 | Entry not found |
Jeevesh8/bert_ft_qqp-6 | f6d23728f8e1a156237109eaff645840ea700003 | 2022-05-09T09:45:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-6 | 8 | null | transformers | 13,348 | Entry not found |
Jeevesh8/bert_ft_qqp-7 | 36c0e83cb23c1f192099e06a6b88902d3abdead9 | 2022-05-09T09:47:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-7 | 8 | null | transformers | 13,349 | Entry not found |
Jeevesh8/bert_ft_qqp-9 | 7b80e7abe8ac9353c7554ecfc6067952158467ac | 2022-05-09T09:53:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-9 | 8 | null | transformers | 13,350 | Entry not found |
Jeevesh8/bert_ft_qqp-10 | 2d57bfedc260f247dbb6be35f50163d88d41b212 | 2022-05-09T09:55:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-10 | 8 | null | transformers | 13,351 | Entry not found |
Jeevesh8/bert_ft_qqp-11 | f94f176d7b36d21923150a7626d3c6c34c3bc56b | 2022-05-09T09:58:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-11 | 8 | null | transformers | 13,352 | Entry not found |
Jeevesh8/bert_ft_qqp-12 | e48592e5e709bc81ca13314388727ddc9821a552 | 2022-05-09T10:00:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-12 | 8 | null | transformers | 13,353 | Entry not found |
Jeevesh8/bert_ft_qqp-13 | ac8046923488859171e26158eac5dd211fedeb72 | 2022-05-09T10:03:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-13 | 8 | null | transformers | 13,354 | Entry not found |
Jeevesh8/bert_ft_qqp-14 | 405a4d7d72b8f86eead3eef03add2de8486ee965 | 2022-05-09T10:05:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-14 | 8 | null | transformers | 13,355 | Entry not found |
Jeevesh8/bert_ft_qqp-17 | e9a0ced9820211b8068cc946add00c244900620a | 2022-05-09T10:13:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-17 | 8 | null | transformers | 13,356 | Entry not found |
Jeevesh8/bert_ft_qqp-18 | b8d31e151dbaa50584b8fe6226d65410e058c70b | 2022-05-09T10:16:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-18 | 8 | null | transformers | 13,357 | Entry not found |
Jeevesh8/bert_ft_qqp-19 | 0553d4cb4ba11b12bb7316573eab850143b48dd5 | 2022-05-09T10:18:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-19 | 8 | null | transformers | 13,358 | Entry not found |
Jeevesh8/bert_ft_qqp-20 | 7e022bebd7dcd873485b5e17e0c8982647ec2e44 | 2022-05-09T10:21:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-20 | 8 | null | transformers | 13,359 | Entry not found |
Jeevesh8/bert_ft_qqp-22 | 6bd6ba408f9c3098351a45a675026486cf4d6b44 | 2022-05-09T10:26:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-22 | 8 | null | transformers | 13,360 | Entry not found |
Jeevesh8/bert_ft_qqp-23 | b0a4c29f00adfd7d65803ac05eb0096ca99c8099 | 2022-05-09T10:28:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-23 | 8 | null | transformers | 13,361 | Entry not found |
Jeevesh8/bert_ft_qqp-24 | 6e995362c0deca80c09147cd801c9f2c06ad7e54 | 2022-05-09T10:31:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-24 | 8 | null | transformers | 13,362 | Entry not found |
Jeevesh8/bert_ft_qqp-25 | 68685a1670d042c5a1b4a658d8b4e92c6b6f395c | 2022-05-09T10:34:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-25 | 8 | null | transformers | 13,363 | Entry not found |
Jeevesh8/bert_ft_qqp-27 | 33e709258ac64198855c6139a5a7fbd84c6b9c24 | 2022-05-09T10:39:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-27 | 8 | null | transformers | 13,364 | Entry not found |
Jeevesh8/bert_ft_qqp-28 | 8ce8833bf2e1b1373ae072284289debdc5c23005 | 2022-05-09T10:41:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-28 | 8 | null | transformers | 13,365 | Entry not found |
Jeevesh8/bert_ft_qqp-29 | 0dcef2e0d05c40237517985713de54b0e98f370e | 2022-05-09T10:44:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-29 | 8 | null | transformers | 13,366 | Entry not found |
Jeevesh8/bert_ft_qqp-31 | 22a6483b218ff2b1063338ce520c5860998bfe5d | 2022-05-09T10:49:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-31 | 8 | null | transformers | 13,367 | Entry not found |
Jeevesh8/bert_ft_qqp-34 | 9904e1c7fa3f486a65e77ebc11dfe6952b07e9ae | 2022-05-09T10:57:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-34 | 8 | null | transformers | 13,368 | Entry not found |
Jeevesh8/bert_ft_qqp-35 | 93069b8030222e96aa60011f8bfac4312d47d4e1 | 2022-05-09T10:59:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-35 | 8 | null | transformers | 13,369 | Entry not found |
Jeevesh8/bert_ft_qqp-36 | cce021f6014cb926f332826a98ae6a21ae907ff0 | 2022-05-09T11:02:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-36 | 8 | null | transformers | 13,370 | Entry not found |
Jeevesh8/bert_ft_qqp-37 | 7782999cad248cad1514057bd933355fa2e19422 | 2022-05-09T11:04:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-37 | 8 | null | transformers | 13,371 | Entry not found |
Jeevesh8/bert_ft_qqp-38 | a654927511c38c791eb22ab2f47e6a9ed83ffce2 | 2022-05-09T11:07:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-38 | 8 | null | transformers | 13,372 | Entry not found |
Jeevesh8/bert_ft_qqp-39 | 95891883ad62513ce18abbc87274a32f3e58fb96 | 2022-05-09T11:10:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-39 | 8 | null | transformers | 13,373 | Entry not found |
Jeevesh8/bert_ft_qqp-40 | 7d08f4b71e9fa9ed584cdaa0cda0ed8734ad3905 | 2022-05-09T11:12:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-40 | 8 | null | transformers | 13,374 | Entry not found |
Jeevesh8/bert_ft_qqp-41 | 7e10284da7dffe9cdb0507e6ee86b99bb932b650 | 2022-05-09T11:15:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-41 | 8 | null | transformers | 13,375 | Entry not found |
Jeevesh8/bert_ft_qqp-42 | d556c400d7b4a10c9d05994cf0dd83ba5474d352 | 2022-05-09T11:17:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-42 | 8 | null | transformers | 13,376 | Entry not found |
Jeevesh8/bert_ft_qqp-44 | 9dde3e25495601750fb76f7434ceb3e092e5377f | 2022-05-09T11:22:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-44 | 8 | null | transformers | 13,377 | Entry not found |
Jeevesh8/bert_ft_qqp-45 | e1a65be9ad3263eea6ceaf71501841209986aa17 | 2022-05-09T11:25:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-45 | 8 | null | transformers | 13,378 | Entry not found |
Jeevesh8/bert_ft_qqp-46 | 505c622bdc1055cd4bab8ffb8b468f4c7e8f596f | 2022-05-09T11:27:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-46 | 8 | null | transformers | 13,379 | Entry not found |
Jeevesh8/bert_ft_qqp-47 | d959ea1069bcadcc32f6af557fd08e4fb32651bf | 2022-05-09T11:30:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-47 | 8 | null | transformers | 13,380 | Entry not found |
Jeevesh8/bert_ft_qqp-48 | 97eb8670de6b62228bf476002513e6ec8efef7ed | 2022-05-09T11:32:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-48 | 8 | null | transformers | 13,381 | Entry not found |
Jeevesh8/bert_ft_qqp-49 | 1eafe761ee61392fe2c9df057ed36505cc6af97c | 2022-05-09T11:35:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-49 | 8 | null | transformers | 13,382 | Entry not found |
Jeevesh8/bert_ft_qqp-50 | 5952ba88828872b802c21bbc56635dddeffcdc22 | 2022-05-09T11:37:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-50 | 8 | null | transformers | 13,383 | Entry not found |
Jeevesh8/bert_ft_qqp-52 | af2036726d43a325f18950cf6230c3dcb18c2f55 | 2022-05-09T11:43:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-52 | 8 | null | transformers | 13,384 | Entry not found |
Theimisa/distilbert-base-uncased-aisera_texts-v3 | ba96f540f5ca557aff68708741e9ec3e2d4deaa6 | 2022-05-10T07:49:12.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | Theimisa | null | Theimisa/distilbert-base-uncased-aisera_texts-v3 | 8 | null | transformers | 13,385 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-aisera_texts-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-aisera_texts-v3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
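
The listed values can be expressed with the standard `transformers` `TrainingArguments`; the sketch below is not part of the original card, and `output_dir` is a placeholder assumption:

```python
from transformers import TrainingArguments

# Minimal sketch mapping the hyperparameters above onto TrainingArguments.
# output_dir is an assumed placeholder; the Adam betas/epsilon match the library defaults listed above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-aisera_texts-v3",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```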
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0183 | 1.0 | 3875 | 1.8913 |
| 1.9018 | 2.0 | 7750 | 1.8106 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
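
## How to use (illustrative)

A minimal usage sketch, not part of the original card, assuming the checkpoint is published on the Hub under this model id; the example sentence is an invented placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the fill-mask pipeline (the card's pipeline tag).
fill_mask = pipeline("fill-mask", model="Theimisa/distilbert-base-uncased-aisera_texts-v3")

# Predict candidate tokens for the masked position; the sentence is illustrative only.
print(fill_mask("The support [MASK] resolved the request quickly."))
```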
|
Jeevesh8/bert_ft_qqp-53 | 3df6f31542c3bbcebad6c487429e83530f4b108d | 2022-05-09T11:45:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-53 | 8 | null | transformers | 13,386 | Entry not found |
Jeevesh8/bert_ft_qqp-54 | 33329954443ed758d484ef3ed2d5c242a6111f39 | 2022-05-09T11:48:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-54 | 8 | null | transformers | 13,387 | Entry not found |
Jeevesh8/bert_ft_qqp-55 | 39e97ea1771058d7cefa68478c7d089a3aa4d410 | 2022-05-09T11:50:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-55 | 8 | null | transformers | 13,388 | Entry not found |
Jeevesh8/bert_ft_qqp-56 | d1484de3f8a7c93a27965f4f23363140415abe80 | 2022-05-09T11:53:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-56 | 8 | null | transformers | 13,389 | Entry not found |
Jeevesh8/bert_ft_qqp-57 | 13d1936482ed322259dc648837e35c8e2a05e6e5 | 2022-05-09T11:55:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-57 | 8 | null | transformers | 13,390 | Entry not found |
Jeevesh8/bert_ft_qqp-58 | bfafc9e4a730f266d036599780ced2062092681d | 2022-05-09T11:58:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-58 | 8 | null | transformers | 13,391 | Entry not found |
Jeevesh8/bert_ft_qqp-59 | 404baf5dd85ed8c63baa091fd0eaac72d2fb1291 | 2022-05-09T12:00:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-59 | 8 | null | transformers | 13,392 | Entry not found |
Jeevesh8/bert_ft_qqp-60 | 2330f1cad715e58aa1b94e9ab298ae215e2eccef | 2022-05-09T12:03:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-60 | 8 | null | transformers | 13,393 | Entry not found |
Jeevesh8/bert_ft_qqp-61 | c997c4a8586134cc403643b9535da8c71911ab6d | 2022-05-09T12:06:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-61 | 8 | null | transformers | 13,394 | Entry not found |
Jeevesh8/bert_ft_qqp-62 | f80411d500dcc8265ba804022c057c73818e4c10 | 2022-05-09T12:08:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-62 | 8 | null | transformers | 13,395 | Entry not found |
Jeevesh8/bert_ft_qqp-63 | 821d6a4ecd5745c44d5029c5a59cb41636753588 | 2022-05-09T12:11:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-63 | 8 | null | transformers | 13,396 | Entry not found |
Jeevesh8/bert_ft_qqp-64 | 4e334de940eeb75b09fc98881ae8638c55210af8 | 2022-05-09T12:13:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-64 | 8 | null | transformers | 13,397 | Entry not found |
Jeevesh8/bert_ft_qqp-65 | ee9aa6e7459069cd4f92c0ef715df4810d8fd58f | 2022-05-09T12:16:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-65 | 8 | null | transformers | 13,398 | Entry not found |
Jeevesh8/bert_ft_qqp-66 | ec6278ff4b51446d5e0cae19abcb72c8dd55e8a7 | 2022-05-09T12:18:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-66 | 8 | null | transformers | 13,399 | Entry not found |