| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| 
	edoumazane/distilbert-base-uncased-finetuned-ner | 
	edoumazane | 2022-03-22T09:56:14Z | 7 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "distilbert",
  "token-classification",
  "generated_from_trainer",
  "dataset:conll2003",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-22T09:27:52Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9247134038800705
    - name: Recall
      type: recall
      value: 0.9384718648618414
    - name: F1
      type: f1
      value: 0.9315418355449449
    - name: Accuracy
      type: accuracy
      value: 0.9836529143565221
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9247
- Recall: 0.9385
- F1: 0.9315
- Accuracy: 0.9837
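The card does not include a usage example; a minimal sketch, assuming the standard `transformers` pipeline API for token classification (the input sentence is only illustrative):
```python
# Hedged usage sketch (not part of the original card): load the fine-tuned NER
# checkpoint with the token-classification pipeline.
from transformers import pipeline

ner = pipeline("token-classification",
               model="edoumazane/distilbert-base-uncased-finetuned-ner")
print(ner("Hugging Face is based in New York City."))
```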
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2421        | 1.0   | 878  | 0.0701          | 0.9083    | 0.9217 | 0.9149 | 0.9801   |
| 0.0555        | 2.0   | 1756 | 0.0599          | 0.9204    | 0.9357 | 0.9280 | 0.9830   |
| 0.0311        | 3.0   | 2634 | 0.0612          | 0.9247    | 0.9385 | 0.9315 | 0.9837   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	celine98/canine-s-finetuned-sst2 | 
	celine98 | 2022-03-22T09:47:45Z | 4 | 2 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "canine",
  "text-classification",
  "generated_from_trainer",
  "dataset:glue",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-21T22:35:16Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: canine-s-finetuned-sst2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: sst2
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8577981651376146
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-s-finetuned-sst2
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5259
- Accuracy: 0.8578
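As a usage note not present in the original card, a minimal sketch with the text-classification pipeline (CANINE operates at the character level, so no special preprocessing is needed):
```python
# Hedged sketch: run the SST-2 fine-tuned CANINE checkpoint through the
# text-classification pipeline; the example sentence is illustrative.
from transformers import pipeline

classifier = pipeline("text-classification", model="celine98/canine-s-finetuned-sst2")
print(classifier("This movie was a pleasant surprise."))
```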
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3524        | 1.0   | 4210  | 0.4762          | 0.8257   |
| 0.2398        | 2.0   | 8420  | 0.4169          | 0.8567   |
| 0.1797        | 3.0   | 12630 | 0.5259          | 0.8578   |
| 0.152         | 4.0   | 16840 | 0.5996          | 0.8532   |
| 0.1026        | 5.0   | 21050 | 0.6676          | 0.8578   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	merve/anime-faces-generator | 
	merve | 2022-03-22T09:15:31Z | 0 | 2 | 
	keras | 
	[
  "keras",
  "tf-keras",
  "dcgan",
  "dataset:merve/anime-faces",
  "license:apache-2.0",
  "region:us"
] | null | 2022-03-04T16:41:30Z | 
	---
license: apache-2.0
library_name: keras 
tags:
- dcgan 
datasets:
- merve/anime-faces
---
## Model description
Anime face generator model using [TensorFlow DCGAN example](https://www.tensorflow.org/tutorials/generative/dcgan).
## Training and evaluation data
The model is trained on the [anime faces dataset](https://huggingface.co/datasets/merve/anime-faces). The dataset consists of 21,551 anime faces scraped from www.getchu.com, which were then cropped using the anime face detection algorithm [here](https://github.com/nagadomi/lbpcascade_animeface). All images are resized to 64×64 for convenience. The model takes a noise vector as input and uses Conv2DTranspose layers for upsampling. If you want to pass the output to another discriminator, the output shape consists of 28x28 images.
## How to use this model
You can use this model to generate new anime faces. If you want to continue training, use it together with the [discriminator](https://huggingface.co/merve/anime-faces-discriminator) inside `tf.GradientTape()`, as described in the DCGAN tutorial.
```python
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("merve/anime-faces-generator")
```
You can generate examples from random noise.
```python
import tensorflow as tf

# number_of_examples_to_generate and noise_dim (the size of the latent vector the
# generator expects) are placeholders to fill in
seed = tf.random.normal([number_of_examples_to_generate, noise_dim])
predictions = model(seed, training=False)  # inference mode
```
## Intended use and biases
This model is not intended for production.
### Generated images 
 | 
| 
	Yaxin/xlm-roberta-base-conll2003-ner | 
	Yaxin | 2022-03-22T08:11:52Z | 81 | 3 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "xlm-roberta",
  "token-classification",
  "generated_from_trainer",
  "dataset:conll2003",
  "license:mit",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-22T07:36:34Z | 
	---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test-conll2003-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9459188783174762
    - name: Recall
      type: recall
      value: 0.9537192864355436
    - name: F1
      type: f1
      value: 0.94980306712478
    - name: Accuracy
      type: accuracy
      value: 0.9911218410498034
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-conll2003-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0470
- Precision: 0.9459
- Recall: 0.9537
- F1: 0.9498
- Accuracy: 0.9911
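The card omits a usage snippet; a minimal sketch, assuming the standard pipeline API, where `aggregation_strategy="simple"` merges word pieces into whole entity spans:
```python
# Hedged sketch (not from the original card): grouped NER predictions.
from transformers import pipeline

ner = pipeline("ner",
               model="Yaxin/xlm-roberta-base-conll2003-ner",
               aggregation_strategy="simple")
print(ner("The European Commission met Angela Merkel in Berlin."))
```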
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
 | 
| 
	lazyturtl/WEC-types | 
	lazyturtl | 2022-03-22T04:54:04Z | 60 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "vit",
  "image-classification",
  "huggingpics",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	image-classification | 2022-03-22T04:53:55Z | 
	---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: WEC-types
  results:
  - task:
      name: Image Classification
      type: image-classification
    metrics:
      - name: Accuracy
        type: accuracy
        value: 0.7830188870429993
---
# WEC-types
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
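For completeness, a minimal inference sketch (not part of the generated card), assuming the `transformers` image-classification pipeline; the image path is a placeholder:
```python
# Hedged sketch: classify a photo of a wave energy converter.
from transformers import pipeline

classifier = pipeline("image-classification", model="lazyturtl/WEC-types")
print(classifier("wave_energy_converter.jpg"))  # placeholder path
```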
## Example Images
#### Attenuators

#### Oscillating water column

#### Overtopping Devices

#### Point Absorber
 | 
| 
	razent/SciFive-large-Pubmed_PMC-MedNLI | 
	razent | 2022-03-22T04:05:21Z | 675 | 2 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tf",
  "t5",
  "text2text-generation",
  "mednli",
  "en",
  "dataset:pubmed",
  "dataset:pmc/open_access",
  "arxiv:2106.03598",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text2text-generation | 2022-03-20T17:24:33Z | 
	---
language: 
  - en
tags:
- text2text-generation
- mednli
datasets:
- pubmed
- pmc/open_access
widget:
- text: "mednli: sentence1: In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA. sentence2: The patient is hemodynamically stable"
---
# SciFive Pubmed+PMC Large on MedNLI
## Introduction
The finetuned SciFive Pubmed+PMC Large model achieved state-of-the-art results on [MedNLI (Medical Natural Language Inference)](https://paperswithcode.com/sota/natural-language-inference-on-mednli).
Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)
Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_
## How to use
For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive). 
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-large-Pubmed_PMC-MedNLI")  
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-large-Pubmed_PMC-MedNLI")
model.cuda()
sent_1 = "In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA."
sent_2 = "The patient is hemodynamically stable"
text =  f"mednli: sentence1: {sent_1} sentence2: {sent_2}"
encoding = tokenizer.encode_plus(text, padding='max_length', max_length=256, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")
outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=8,
    early_stopping=True
)
for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
``` | 
| 
	mimicheng/codeparrot-ds | 
	mimicheng | 2022-03-22T03:45:36Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "license:mit",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-21T19:59:48Z | 
	---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.7397
- eval_runtime: 603.8598
- eval_samples_per_second: 154.281
- eval_steps_per_second: 4.822
- epoch: 0.08
- step: 5000
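The card does not show inference; since the checkpoint is a GPT-2 model and the repo name suggests a CodeParrot-style Python-code corpus, a hedged sketch with the text-generation pipeline might look like this:
```python
# Hedged sketch (not from the original card): code completion via text generation.
from transformers import pipeline

generator = pipeline("text-generation", model="mimicheng/codeparrot-ds")
print(generator("def fibonacci(n):", max_new_tokens=40)[0]["generated_text"])
```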
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES | 
	StivenLancheros | 2022-03-21T22:36:06Z | 11 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "roberta",
  "token-classification",
  "generated_from_trainer",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-21T22:05:55Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES
This model is a fine-tuned version of [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2043
- Precision: 0.8666
- Recall: 0.8614
- F1: 0.8639
- Accuracy: 0.9734
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) corpus in Spanish (MT translated) and English. Entity tags have been normalized and replaced from the original three-letter codes to full names, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement: 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Three datasets (original, augmented, MT-translated CRAFT) were concatenated. To improve the F1 score, transfer learning was completed in two steps.
Using [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES) as a base model, I fine-tuned once more on the original CRAFT dataset in English.
Biobert --> Augmented CRAFT --> CRAFT ES (MT translated)
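A usage sketch not present in the original card, loading the model explicitly instead of through a pipeline (the Spanish sentence is only illustrative):
```python
# Hedged sketch: token-level predictions with explicit tokenizer/model loading.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

name = "StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

text = "El gen BRCA1 codifica una proteína implicada en la reparación del ADN."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)[0]
for token, pred in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), predictions):
    print(token, model.config.id2label[pred.item()])
```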
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0088        | 1.0   | 1360 | 0.1793          | 0.8616    | 0.8487 | 0.8551 | 0.9721   |
| 0.0046        | 2.0   | 2720 | 0.1925          | 0.8618    | 0.8426 | 0.8521 | 0.9713   |
| 0.0032        | 3.0   | 4080 | 0.1926          | 0.8558    | 0.8630 | 0.8594 | 0.9725   |
| 0.0011        | 4.0   | 5440 | 0.2043          | 0.8666    | 0.8614 | 0.8639 | 0.9734   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_EN | 
	StivenLancheros | 2022-03-21T22:10:39Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "roberta",
  "token-classification",
  "generated_from_trainer",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-21T21:04:02Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_EN
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_EN
This model is a fine-tuned version of [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2308
- Precision: 0.8366
- Recall: 0.8513
- F1: 0.8439
- Accuracy: 0.9681
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) corpus in Spanish and English. Entity tags have been normalized and replaced from the original three-letter codes to full names, e.g. B-Protein, I-Chemical. This model is trained on augmented data created using Entity Replacement: 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Both datasets (original, augmented) were concatenated. To improve the F1 score, transfer learning was completed in two steps. Using [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN) as a base model, I fine-tuned once more on the original CRAFT dataset in English.
Biobert --> Augmented CRAFT --> CRAFT
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
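For reference, a hedged reconstruction (not from the original card) of how the hyperparameters above might map onto `transformers.TrainingArguments`; the output directory is an assumption, and the Adam betas/epsilon listed above are the library defaults, so they are not set explicitly:
```python
# Sketch only: the actual training script is not provided in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ner-CRAFT_AugmentedTransfer_EN",  # assumed name
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
)
```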
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0129        | 1.0   | 1360 | 0.2119          | 0.8404    | 0.8364 | 0.8384 | 0.9666   |
| 0.0072        | 2.0   | 2720 | 0.2132          | 0.8173    | 0.8583 | 0.8373 | 0.9662   |
| 0.0042        | 3.0   | 4080 | 0.2180          | 0.8410    | 0.8515 | 0.8462 | 0.9686   |
| 0.0019        | 4.0   | 5440 | 0.2308          | 0.8366    | 0.8513 | 0.8439 | 0.9681   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	huggingtweets/elonmusk-garyvee | 
	huggingtweets | 2022-03-21T19:57:10Z | 4 | 1 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "gpt2",
  "text-generation",
  "huggingtweets",
  "en",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-21T19:55:22Z | 
	---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-garyvee/1647892564866/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
    <div class="flex">
        <div
			style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg')">
        </div>
        <div
            style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1493524673962852353/qRxbC9Xq_400x400.jpg')">
        </div>
        <div
            style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
        </div>
    </div>
    <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
    <div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Gary Vaynerchuk</div>
    <div style="text-align: center; font-size: 14px;">@elonmusk-garyvee</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Gary Vaynerchuk.
| Data | Elon Musk | Gary Vaynerchuk |
| --- | --- | --- |
| Tweets downloaded | 2200 | 3247 |
| Retweets | 102 | 712 |
| Short tweets | 671 | 842 |
| Tweets kept | 1427 | 1693 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/abt9l46e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-garyvee's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/4wls4y5v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/4wls4y5v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/elonmusk-garyvee')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma* ([@borisdayma on Twitter](https://twitter.com/intent/follow?screen_name=borisdayma))
For more details, visit [the project repository](https://github.com/borisdayma/huggingtweets).
 | 
| 
	Ameer05/distilbart-cnn-12-6-finetuned-resume-summarizer | 
	Ameer05 | 2022-03-21T19:35:06Z | 17 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "bart",
  "text2text-generation",
  "summarization",
  "generated_from_trainer",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	summarization | 2022-03-21T19:18:43Z | 
	---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-resume-summarizer
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-resume-summarizer
This model is a fine-tuned version of [Ameer05/model-tokenizer-repo](https://huggingface.co/Ameer05/model-tokenizer-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1123
- Rouge1: 52.5826
- Rouge2: 34.3861
- Rougel: 41.8525
- Rougelsum: 51.0015
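The card stops at metrics; a minimal sketch, assuming the standard summarization pipeline (the input text and length limits are placeholders):
```python
# Hedged sketch (not from the original card): summarize a resume-style document.
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="Ameer05/distilbart-cnn-12-6-finetuned-resume-summarizer")
text = "Paste the resume text to summarize here..."
print(summarizer(text, max_length=128, min_length=30)[0]["summary_text"])
```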
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log        | 0.91  | 5    | 3.2243          | 42.8593 | 24.8652 | 34.1789 | 41.406    |
| No log        | 1.91  | 10   | 2.6948          | 48.8571 | 28.6711 | 39.2648 | 46.188    |
| No log        | 2.91  | 15   | 2.4665          | 50.6085 | 30.4034 | 39.7406 | 48.5449   |
| No log        | 3.91  | 20   | 2.3329          | 52.2357 | 32.3398 | 41.574  | 49.4316   |
| 3.6611        | 4.91  | 25   | 2.2362          | 52.0134 | 33.1612 | 41.3103 | 50.255    |
| 3.6611        | 5.91  | 30   | 2.1833          | 51.5434 | 32.7045 | 40.5683 | 49.4238   |
| 3.6611        | 6.91  | 35   | 2.1462          | 53.5144 | 35.4518 | 42.8615 | 51.4053   |
| 3.6611        | 7.91  | 40   | 2.1518          | 52.0985 | 33.6754 | 41.5936 | 50.5159   |
| 2.0326        | 8.91  | 45   | 2.1075          | 53.1401 | 34.9721 | 42.2973 | 51.8454   |
| 2.0326        | 9.91  | 50   | 2.1123          | 52.5826 | 34.3861 | 41.8525 | 51.0015   |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
 | 
| 
	huggingtweets/rebeudeter | 
	huggingtweets | 2022-03-21T17:55:17Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "gpt2",
  "text-generation",
  "huggingtweets",
  "en",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-21T17:55:08Z | 
	---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
    <div class="flex">
        <div
			style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421289007753859077/3X1VHMRx_400x400.jpg')">
        </div>
        <div
            style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
        </div>
        <div
            style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
        </div>
    </div>
    <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
    <div style="text-align: center; font-size: 16px; font-weight: 800">Billy ☄️🧡</div>
    <div style="text-align: center; font-size: 14px;">@rebeudeter</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Billy ☄️🧡.
| Data | Billy ☄️🧡 |
| --- | --- |
| Tweets downloaded | 3220 |
| Retweets | 363 |
| Short tweets | 205 |
| Tweets kept | 2652 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mz5i9lj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rebeudeter's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1qau529e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1qau529e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/rebeudeter')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma* ([@borisdayma on Twitter](https://twitter.com/intent/follow?screen_name=borisdayma))
For more details, visit [the project repository](https://github.com/borisdayma/huggingtweets).
 | 
| 
	ianMconversica/autonlp-test-654919306 | 
	ianMconversica | 2022-03-21T17:29:34Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "t5",
  "text2text-generation",
  "autonlp",
  "unk",
  "dataset:McIan91/autonlp-data-test",
  "co2_eq_emissions",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text2text-generation | 2022-03-21T17:28:50Z | 
	---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- McIan91/autonlp-data-test
co2_eq_emissions: 0.7013851565380207
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 654919306
- CO2 Emissions (in grams): 0.7013851565380207
## Validation Metrics
- Loss: 2.5570242404937744
- Rouge1: 72.7273
- Rouge2: 44.4444
- RougeL: 72.7273
- RougeLsum: 72.7273
- Gen Len: 17.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/McIan91/autonlp-test-654919306
``` | 
| 
	qahq/CL-AraBERTv0.1-base | 
	qahq | 2022-03-21T16:04:15Z | 11 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "bert",
  "fill-mask",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	fill-mask | 2022-03-14T00:26:07Z | 
	---
license: apache-2.0
---
 | 
| 
	espnet/aaf_openslr57 | 
	espnet | 2022-03-21T14:36:37Z | 1 | 0 | 
	espnet | 
	[
  "espnet",
  "audio",
  "automatic-speech-recognition",
  "fr",
  "dataset:openslr",
  "arxiv:1804.00015",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-21T04:58:18Z | 
	---
tags:
- espnet
- audio
- automatic-speech-recognition
language: fr
datasets:
- openslr
---
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
      title={ESPnet: End-to-End Speech Processing Toolkit}, 
      author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
      year={2018},
      eprint={1804.00015},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` | 
| 
	Newt007/bin_cls_att.h5 | 
	Newt007 | 2022-03-21T14:18:09Z | 0 | 0 | null | 
	[
  "region:us"
] | null | 2022-03-21T14:11:06Z | 
Binary classification model for malicious and benign requests.
```python
from keras import models

model = models.load_model('xxx.h5')  # 'xxx.h5' is a placeholder for the .h5 weights file
```
Environment:
- Python 3.7
- keras==2.4.3
- tensorflow==2.3.1
 | 
| 
	beston91/gpt2-xl_ft_logits_1k_2 | 
	beston91 | 2022-03-21T11:27:12Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-20T22:16:05Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_logits_1k_2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_logits_1k_2
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.91  | 5    | 6.0743          |
| No log        | 1.91  | 10   | 6.1649          |
| No log        | 2.91  | 15   | 6.3068          |
| No log        | 3.91  | 20   | 6.4793          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.59307861328125 | 
| 
	selimsametoglu/selims | 
	selimsametoglu | 2022-03-21T11:01:59Z | 7 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "bert",
  "text-classification",
  "generated_from_trainer",
  "dataset:tweet_eval",
  "license:mit",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-19T16:00:34Z | 
	---
license: mit
tags:
- generated_from_trainer
datasets:
- tweet_eval
model-index:
- name: selims
  results: []
widget:
- text: "I love conducting research on twins!"
  example_title: "Sentiment analysis - English"
- text: "Ja, ik vind het tweelingen onderzoek leuk maar complex, weet je."
  example_title: "Sentiment analysis - Dutch"
---
# selims
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the tweet_eval dataset.
## Model description
This is a multilingual sentiment analysis model whose outputs range from 1 to 5, following the same logic as 1-to-5-star reviews.
## Intended uses & limitations
This sentiment model can be applied to datasets in the following languages: English, Dutch, German, French, Spanish, and Italian. 
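A minimal inference sketch, not part of the original card, reusing the widget examples above; per the description, outputs follow a 1-to-5 scale:
```python
# Hedged sketch: multilingual sentiment scoring with the text-classification pipeline.
from transformers import pipeline

sentiment = pipeline("text-classification", model="selimsametoglu/selims")
print(sentiment("I love conducting research on twins!"))
print(sentiment("Ja, ik vind het tweelingen onderzoek leuk maar complex, weet je."))
```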
## Training and evaluation data
For fine-tuning this model, the tweet_eval dataset was used.
## Training procedure
Please refer to the information below:
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cpu
- Datasets 2.0.0
- Tokenizers 0.10.3
 | 
| 
	beston91/gpt2-xl_ft_logits_5k_2 | 
	beston91 | 2022-03-21T10:16:30Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-20T23:02:24Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_logits_5k_2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_logits_5k_2
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.2407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.99  | 27   | 6.1106          |
| No log        | 1.99  | 54   | 6.1400          |
| No log        | 2.99  | 81   | 6.1875          |
| No log        | 3.99  | 108  | 6.2407          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.59415626525879 | 
| 
	imkaushalpatel/YOLOv5 | 
	imkaushalpatel | 2022-03-21T09:50:21Z | 0 | 0 | null | 
	[
  "region:us"
] | null | 2022-03-21T09:49:14Z | 
	YOLOv5 🚀 is a family of compound-scaled object detection models trained on the COCO dataset, and includes simple functionality for Test Time Augmentation (TTA), model ensembling, hyperparameter evolution, and export to ONNX, CoreML and TFLite.
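This repository ships no usage snippet; a hedged sketch of the loading route commonly shown in the upstream YOLOv5 README, via `torch.hub` from `ultralytics/yolov5` (not from this Hub repo):
```python
# Sketch only: load the small pretrained YOLOv5 model and run inference on an example image.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("https://ultralytics.com/images/zidane.jpg")  # path, URL, or array
results.print()
```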
 | 
| 
	Ameer05/test | 
	Ameer05 | 2022-03-21T09:35:03Z | 18 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "bart",
  "text2text-generation",
  "summarization",
  "generated_from_trainer",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	summarization | 2022-03-21T08:16:45Z | 
	---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [Ameer05/tokenizer-repo](https://huggingface.co/Ameer05/tokenizer-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6109
- Rouge1: 54.9442
- Rouge2: 45.3299
- Rougel: 50.5219
- Rougelsum: 53.6475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log        | 0.91  | 5    | 2.3705          | 53.62   | 44.3835 | 49.6135 | 52.693    |
| No log        | 1.91  | 10   | 1.9035          | 47.478  | 37.0934 | 39.7935 | 45.1881   |
| No log        | 2.91  | 15   | 1.7990          | 54.2488 | 45.0782 | 49.8421 | 52.7564   |
| No log        | 3.91  | 20   | 1.7125          | 55.7903 | 46.7554 | 52.2733 | 54.9389   |
| 2.4456        | 4.91  | 25   | 1.6421          | 52.2279 | 43.4391 | 49.6955 | 51.2915   |
| 2.4456        | 5.91  | 30   | 1.6102          | 55.8598 | 47.3293 | 53.1337 | 54.8596   |
| 2.4456        | 6.91  | 35   | 1.6164          | 53.7902 | 44.6622 | 49.5045 | 52.2304   |
| 2.4456        | 7.91  | 40   | 1.6015          | 51.5597 | 42.0333 | 47.9639 | 50.1154   |
| 1.239         | 8.91  | 45   | 1.6067          | 53.0301 | 43.7214 | 49.0227 | 51.8109   |
| 1.239         | 9.91  | 50   | 1.6109          | 54.9442 | 45.3299 | 50.5219 | 53.6475   |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
 | 
| 
	Yaxin/electra-small-discriminator-yelp-mlm | 
	Yaxin | 2022-03-21T09:21:02Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "electra",
  "fill-mask",
  "generated_from_trainer",
  "dataset:yelp_review_full",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	fill-mask | 2022-03-21T08:41:41Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: test-electra-small-yelp
  results:
  - task:
      name: Masked Language Modeling
      type: fill-mask
    dataset:
      name: yelp_review_full yelp_review_full
      type: yelp_review_full
      args: yelp_review_full
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5677007577622891
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-electra-small-yelp
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the yelp_review_full yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2601
- Accuracy: 0.5677
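The card does not show inference; a minimal sketch with the fill-mask pipeline (the ELECTRA tokenizer uses the `[MASK]` token; the example sentence is illustrative):
```python
# Hedged sketch (not from the original card): query the MLM head.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Yaxin/electra-small-discriminator-yelp-mlm")
print(fill_mask("The food at this place was absolutely [MASK]."))
```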
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
 | 
| 
	doctorlan/autonlp-ctrip-653519223 | 
	doctorlan | 2022-03-21T09:01:53Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "bert",
  "text-classification",
  "autonlp",
  "unk",
  "dataset:doctorlan/autonlp-data-ctrip",
  "co2_eq_emissions",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-21T08:38:42Z | 
	---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- doctorlan/autonlp-data-ctrip
co2_eq_emissions: 24.879856894708393
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 653519223
- CO2 Emissions (in grams): 24.879856894708393
## Validation Metrics
- Loss: 0.14671853184700012
- Accuracy: 0.9676666666666667
- Precision: 0.9794159885112494
- Recall: 0.9742857142857143
- AUC: 0.9901396825396825
- F1: 0.9768441155407017
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/doctorlan/autonlp-ctrip-653519223
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("doctorlan/autonlp-ctrip-653519223", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("doctorlan/autonlp-ctrip-653519223", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 
| 
	doctorlan/autonlp-JD-bert-653619233 | 
	doctorlan | 2022-03-21T08:54:10Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "bert",
  "text-classification",
  "autonlp",
  "unk",
  "dataset:doctorlan/autonlp-data-JD-bert",
  "co2_eq_emissions",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-21T08:48:42Z | 
	---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- doctorlan/autonlp-data-JD-bert
co2_eq_emissions: 5.919372931976555
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 653619233
- CO2 Emissions (in grams): 5.919372931976555
## Validation Metrics
- Loss: 0.15083155035972595
- Accuracy: 0.952650883627876
- Precision: 0.9631399317406143
- Recall: 0.9412941961307538
- AUC: 0.9828776962419389
- F1: 0.9520917678812415
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/doctorlan/autonlp-JD-bert-653619233
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("doctorlan/autonlp-JD-bert-653619233", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("doctorlan/autonlp-JD-bert-653619233", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 
| 
	mrp/SimCSE-model-WangchanBERTa-V2 | 
	mrp | 2022-03-21T08:34:51Z | 7 | 1 | 
	sentence-transformers | 
	[
  "sentence-transformers",
  "pytorch",
  "camembert",
  "feature-extraction",
  "sentence-similarity",
  "transformers",
  "autotrain_compatible",
  "text-embeddings-inference",
  "endpoints_compatible",
  "region:us"
] | 
	sentence-similarity | 2022-03-21T08:33:54Z | 
	---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
    return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 221 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
  ```
  {'scale': 20.0, 'similarity_fct': 'cos_sim'}
  ```
Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 3e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 10000,
    "weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: CamembertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 
| 
	IsaacSST/gpt2-xl-ft-d4-0.3 | 
	IsaacSST | 2022-03-21T04:24:22Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-21T01:38:11Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d4-0.3
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d4-0.3
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 156  | 1.2334          |
| No log        | 2.0   | 312  | 1.2392          |
| No log        | 3.0   | 468  | 1.2944          |
| 1.1868        | 4.0   | 624  | 1.3401          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	BigSalmon/InformalToFormalLincoln28 | 
	BigSalmon | 2022-03-21T03:14:50Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "gpt2",
  "text-generation",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-21T03:03:13Z | 
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln28")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln28")
```
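A hedged sketch of running generation with the tokenizer and model loaded above, using one of the prompt formats shown below (sampling settings are illustrative):
```python
prompt = ("informal english: space is huge and needs to be explored.\n"
          "Translated into the Style of Abraham Lincoln:")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```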
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with its own set of powers that prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. 
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. 
***
original:
``` | 
| 
	dodobird/distilbert-base-uncased-finetuned-emotion | 
	dodobird | 2022-03-21T03:04:10Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "distilbert",
  "text-classification",
  "generated_from_trainer",
  "dataset:emotion",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-21T00:37:04Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9245
    - name: F1
      type: f1
      value: 0.9248889383977278
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2154
- Accuracy: 0.9245
- F1: 0.9249
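A minimal usage sketch (the model id matches this repository; the `text-classification` pipeline task is inferred from the card metadata and the example sentence is arbitrary):
```python
from transformers import pipeline

# Emotion classifier fine-tuned on the "emotion" dataset
# (the dataset defines six labels: sadness, joy, love, anger, fear, surprise).
classifier = pipeline(
    "text-classification",
    model="dodobird/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see my friends this weekend!"))
```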
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8175        | 1.0   | 250  | 0.3139          | 0.9025   | 0.8986 |
| 0.2485        | 2.0   | 500  | 0.2154          | 0.9245   | 0.9249 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	saghar/xtremedistil-l6-h384-uncased-finetuned-wikitext103 | 
	saghar | 2022-03-20T23:45:34Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "bert",
  "fill-mask",
  "generated_from_trainer",
  "dataset:wikitext",
  "license:mit",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	fill-mask | 2022-03-19T23:21:13Z | 
	---
license: mit
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: xtremedistil-l6-h384-uncased-finetuned-wikitext103
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtremedistil-l6-h384-uncased-finetuned-wikitext103
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5526
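A minimal usage sketch (the model id matches this repository; the masked sentence is an arbitrary example):
```python
from transformers import pipeline

# Masked-language-model head fine-tuned on wikitext; [MASK] is the BERT-style mask token.
fill_mask = pipeline(
    "fill-mask",
    model="saghar/xtremedistil-l6-h384-uncased-finetuned-wikitext103",
)
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```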
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.1974        | 1.0   | 3125 | 6.7483          |
| 6.8171        | 2.0   | 6250 | 6.5962          |
| 6.7483        | 3.0   | 9375 | 6.5526          |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0
- Datasets 1.1.1
- Tokenizers 0.10.1
 | 
| 
	aytugkaya/distilbert-base-uncased-finetuned-clinc | 
	aytugkaya | 2022-03-20T22:21:56Z | 11 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "distilbert",
  "text-classification",
  "generated_from_trainer",
  "dataset:clinc_oos",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-20T16:49:55Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9148387096774193
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7760
- Accuracy: 0.9148
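A minimal usage sketch (the model id matches this repository; clinc_oos with the `plus` config covers 150 in-scope intents plus an out-of-scope class, and the example query is arbitrary):
```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="aytugkaya/distilbert-base-uncased-finetuned-clinc",
)
# Returns the highest-scoring intent label for the query.
print(intent_classifier("Can you book me a table for two at seven tonight?"))
```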
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2994        | 1.0   | 318  | 3.3016          | 0.7442   |
| 2.6387        | 2.0   | 636  | 1.8892          | 0.8339   |
| 1.5535        | 3.0   | 954  | 1.1602          | 0.8948   |
| 1.0139        | 4.0   | 1272 | 0.8619          | 0.9084   |
| 0.7936        | 5.0   | 1590 | 0.7760          | 0.9148   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
 | 
| 
	jcai1/similarity6 | 
	jcai1 | 2022-03-20T21:38:25Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "bert",
  "text-classification",
  "generated_from_trainer",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-20T21:32:15Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: similarity6
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# similarity6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 393  | 0.2287          | 0.9341   | 0.9112 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	KoboldAI/GPT-Neo-2.7B-Shinen | 
	KoboldAI | 2022-03-20T18:49:18Z | 669 | 22 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "gpt_neo",
  "text-generation",
  "en",
  "license:mit",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-02T23:29:04Z | 
	---
language: en
license: mit
---
# GPT-Neo 2.7B - Shinen
## Model Description
GPT-Neo 2.7B-Shinen is a finetune created using EleutherAI's GPT-Neo 2.7B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way:
```
[Theme: <theme1>, <theme2> ,<theme3>]
<Story goes here>
```
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Shinen')
>>> generator("She was staring at me", do_sample=True, min_length=50)
[{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo-Shinen was trained on a dataset known to contain profanity, lewd, and otherwise abrasive language. GPT-Neo-Shinen *WILL* produce socially unacceptable text without warning.
As with all language models, it is hard to predict in advance how GPT-Neo-Shinen will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model is made using the following software:
```bibtex
@software{gpt-neo,
  author       = {Black, Sid and
                  Leo, Gao and
                  Wang, Phil and
                  Leahy, Connor and
                  Biderman, Stella},
  title        = {{GPT-Neo: Large Scale Autoregressive Language 
                   Modeling with Mesh-Tensorflow}},
  month        = mar,
  year         = 2021,
  note         = {{If you use this software, please cite it using 
                   these metadata.}},
  publisher    = {Zenodo},
  version      = {1.0},
  doi          = {10.5281/zenodo.5297715},
  url          = {https://doi.org/10.5281/zenodo.5297715}
}
``` | 
| 
	KoboldAI/GPT-J-6B-Shinen | 
	KoboldAI | 2022-03-20T18:48:45Z | 1,746 | 24 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "gptj",
  "text-generation",
  "en",
  "arxiv:2101.00027",
  "license:mit",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-02T23:29:04Z | 
	---
language: en
license: mit
---
# GPT-J 6B - Shinen
## Model Description
GPT-J 6B-Shinen is a finetune created using EleutherAI's GPT-J 6B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way:
```
[Theme: <theme1>, <theme2> ,<theme3>]
<Story goes here>
```
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Shinen')
>>> generator("She was staring at me", do_sample=True, min_length=50)
[{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
```
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model uses the following model as its base:
```bibtex
@misc{gpt-j,
  author = {Wang, Ben and Komatsuzaki, Aran},
  title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
 | 
| 
	beston91/gpt2-xl_ft_mult_5k | 
	beston91 | 2022-03-20T17:31:57Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-19T08:50:34Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_5k
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_5k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
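For reference, these settings map roughly onto the Hugging Face `TrainingArguments` API as in the sketch below (the output directory is a placeholder; the exact training script is not part of this card):
```python
from transformers import TrainingArguments

# Rough mapping of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="gpt2-xl_ft_mult_5k",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=32,   # 4 x 32 = total train batch size of 128
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=4,
    fp16=True,                        # "Native AMP" mixed precision
)
```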
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.99  | 27   | 6.3035          |
| No log        | 1.99  | 54   | 1.2709          |
| No log        | 2.99  | 81   | 0.7482          |
| No log        | 3.99  | 108  | 0.6758          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 21.267963409423828
### Dataset Size
Size: 5000 | 
| 
	cammy/pegasus-cnn_dailymail-1000-lit-evalMA-ga1 | 
	cammy | 2022-03-20T16:07:53Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "pegasus",
  "text2text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text2text-generation | 2022-03-20T14:57:04Z | 
	---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-cnn_dailymail-1000-lit-evalMA-ga1
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-cnn_dailymail-1000-lit-evalMA-ga1
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6852
- Rouge1: 25.8242
- Rouge2: 11.1309
- Rougel: 20.7946
- Rougelsum: 22.5591
- Gen Len: 46.32
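A minimal usage sketch (the model id matches this repository; the input text and generation length are placeholders):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="cammy/pegasus-cnn_dailymail-1000-lit-evalMA-ga1",
)
article = "Replace this with the news article or passage you want to summarize."
# max_length caps the summary length in tokens; pick a value to suit your input.
print(summarizer(article, max_length=60)[0]["summary_text"])
```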
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 250  | 1.7061          | 25.8547 | 10.8573 | 20.8419 | 22.5942   | 44.36   |
| 1.4533        | 2.0   | 500  | 1.6876          | 26.105  | 11.5635 | 21.132  | 23.044    | 45.65   |
| 1.4533        | 3.0   | 750  | 1.6852          | 25.8242 | 11.1309 | 20.7946 | 22.5591   | 46.32   |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
 | 
| 
	cammy/pegasus-cnn_dailymail-1000-lit-evalMA-ga | 
	cammy | 2022-03-20T14:36:20Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "pegasus",
  "text2text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text2text-generation | 2022-03-20T13:26:27Z | 
	---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-cnn_dailymail-1000-lit-evalMA-ga
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-cnn_dailymail-1000-lit-evalMA-ga
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6852
- Rouge1: 25.789
- Rouge2: 11.0694
- Rougel: 20.7716
- Rougelsum: 22.4851
- Gen Len: 46.32
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 250  | 1.7061          | 25.8286 | 10.8156 | 20.9502 | 22.6588   | 44.36   |
| 1.4533        | 2.0   | 500  | 1.6876          | 26.0862 | 11.5197 | 21.1282 | 23.0963   | 45.65   |
| 1.4533        | 3.0   | 750  | 1.6852          | 25.789  | 11.0694 | 20.7716 | 22.4851   | 46.32   |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
 | 
| 
	Slavka/distil-bert-finetuned-log-parser-1 | 
	Slavka | 2022-03-20T07:15:25Z | 6 | 0 | 
	transformers | 
	[
  "transformers",
  "tf",
  "distilbert",
  "question-answering",
  "generated_from_keras_callback",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	question-answering | 2022-03-19T01:08:19Z | 
	---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distil-bert-finetuned-log-parser-1
  results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distil-bert-finetuned-log-parser-1
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 33, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	willcai/wav2vec2_common_voice_accents_5 | 
	willcai | 2022-03-20T07:07:37Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "wav2vec2",
  "automatic-speech-recognition",
  "generated_from_trainer",
  "dataset:common_voice",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-19T22:07:12Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_5
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0027
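A minimal usage sketch (the model id matches this repository; the audio path is a placeholder, and the clip should be 16 kHz mono as expected by wav2vec2):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="willcai/wav2vec2_common_voice_accents_5",
)
# Transcribe a local audio file; the pipeline returns a dict with a "text" field.
print(asr("path/to/audio.wav")["text"])
```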
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4163        | 1.34  | 400  | 0.5520          |
| 0.3305        | 2.68  | 800  | 0.1698          |
| 0.2138        | 4.03  | 1200 | 0.1104          |
| 0.1714        | 5.37  | 1600 | 0.0944          |
| 0.1546        | 6.71  | 2000 | 0.0700          |
| 0.1434        | 8.05  | 2400 | 0.0610          |
| 0.1272        | 9.4   | 2800 | 0.0493          |
| 0.1183        | 10.74 | 3200 | 0.0371          |
| 0.1113        | 12.08 | 3600 | 0.0468          |
| 0.1013        | 13.42 | 4000 | 0.0336          |
| 0.0923        | 14.77 | 4400 | 0.0282          |
| 0.0854        | 16.11 | 4800 | 0.0410          |
| 0.0791        | 17.45 | 5200 | 0.0252          |
| 0.0713        | 18.79 | 5600 | 0.0128          |
| 0.0662        | 20.13 | 6000 | 0.0252          |
| 0.0635        | 21.48 | 6400 | 0.0080          |
| 0.0607        | 22.82 | 6800 | 0.0098          |
| 0.0557        | 24.16 | 7200 | 0.0069          |
| 0.0511        | 25.5  | 7600 | 0.0057          |
| 0.0474        | 26.85 | 8000 | 0.0046          |
| 0.045         | 28.19 | 8400 | 0.0037          |
| 0.0426        | 29.53 | 8800 | 0.0027          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
 | 
| 
	espnet/ftshijt_espnet2_asr_dsing_hubert_conformer | 
	espnet | 2022-03-20T04:46:53Z | 1 | 0 | 
	espnet | 
	[
  "espnet",
  "audio",
  "automatic-speech-recognition",
  "dataset:dsing",
  "arxiv:1804.00015",
  "license:cc-by-4.0",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-20T04:45:28Z | 
	---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- dsing
license: cc-by-4.0
---
## ESPnet2 ASR model 
### `espnet/ftshijt_espnet2_asr_dsing_hubert_conformer`
This model was trained by jiatong using dsing recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/dsing/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_dsing_hubert_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Mar 19 23:02:37 EDT 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58)  [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `c1ed71c6899e54c0b3dad82687886b1183cd0885`
  - Commit date: `Wed Mar 16 23:34:49 2022 -0400`
## asr_train_asr_conformer7_hubert_ll60k_large_raw_bpe500_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/dev|482|4018|83.6|9.4|7.0|6.4|22.8|58.3|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/test|480|4632|81.4|12.3|6.3|4.5|23.1|52.1|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/dev|482|18692|88.5|3.1|8.4|5.9|17.4|58.3|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/test|480|21787|87.9|4.3|7.8|4.5|16.6|52.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/dev|482|6097|82.2|7.1|10.7|5.7|23.5|58.3|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/test|480|7736|81.7|9.2|9.1|4.0|22.3|52.1|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer7_hubert_ll60k_large.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer7_hubert_ll60k_large_raw_bpe500_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 35
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
-   - valid
    - acc
    - max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 8
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_bpe500_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
-   - dump/raw/train30_sp/wav.scp
    - speech
    - kaldi_ark
-   - dump/raw/train30_sp/text
    - text
    - text
valid_data_path_and_name_and_type:
-   - dump/raw/dev/wav.scp
    - speech
    - kaldi_ark
-   - dump/raw/dev/text
    - text
    - text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
    lr: 0.0025
scheduler: warmuplr
scheduler_conf:
    warmup_steps: 40000
token_list:
- <blank>
- <unk>
- ▁I
- ''''
- ▁YOU
- S
- T
- ▁THE
- M
- ▁ME
- ▁A
- ▁AND
- ▁TO
- E
- A
- ING
- D
- ▁MY
- ▁
- O
- ▁IT
- I
- N
- RE
- Y
- ▁BE
- ▁IN
- ▁ON
- ▁LOVE
- U
- ▁WE
- LL
- H
- ▁YOUR
- ▁S
- IN
- ▁OF
- ▁DO
- ▁THAT
- ▁ALL
- L
- ▁DON
- ▁OH
- ▁LIKE
- ▁KNOW
- ▁FOR
- ▁CAN
- ▁JUST
- P
- ▁BUT
- ED
- K
- ▁WHEN
- ▁SO
- R
- ▁GO
- ▁WHAT
- ▁C
- ▁WITH
- W
- ▁F
- C
- ▁NO
- ER
- ▁ONE
- ▁LET
- VE
- ES
- ▁NOW
- ▁BABY
- G
- ▁GOT
- ▁COME
- CAUSE
- LE
- B
- ▁B
- AR
- ▁UP
- ▁'
- ▁W
- ▁SEE
- ▁TIME
- ▁ARE
- ▁G
- ▁LOOK
- ▁THIS
- F
- ▁IS
- ▁NEVER
- ▁M
- ▁P
- AN
- ▁WAS
- ▁WAY
- ▁IF
- OR
- ▁SAY
- V
- ▁R
- ▁T
- ▁DOWN
- RA
- ▁THERE
- ▁HEART
- ▁NOT
- RO
- ▁WILL
- ▁OUT
- CE
- ▁WANT
- ▁YEAH
- ▁HAVE
- ▁GIVE
- ▁TOO
- ▁GONNA
- ▁HOW
- ▁NEED
- ▁GET
- ▁TAKE
- ▁EVERY
- ▁FEEL
- ▁HE
- EN
- ▁FROM
- ▁HA
- ▁K
- ▁SHE
- 'ON'
- ▁DI
- RI
- ▁ONLY
- NE
- ▁WHO
- ▁AWAY
- ▁E
- ▁D
- ▁LIFE
- ▁MAKE
- IC
- ▁BACK
- ▁WHERE
- ▁MADE
- ▁DAY
- ▁HERE
- ▁LO
- ▁HER
- ▁AS
- ▁GOOD
- ▁WANNA
- ▁OOH
- ▁TELL
- LY
- TH
- ▁WON
- ▁LIGHT
- ▁KEEP
- ▁MA
- ▁LA
- ▁SH
- ▁WORLD
- ▁MORE
- ▁LI
- AL
- ▁COULD
- ▁GIRL
- ▁NOTHING
- ▁EVER
- ▁THINK
- IE
- ▁BY
- ▁AT
- ▁TONIGHT
- ▁THEY
- ▁CALL
- ▁HO
- ▁WOULD
- IL
- ▁OUR
- ▁FALL
- ▁NIGHT
- ▁THAN
- ▁DE
- ▁SOME
- ▁WAIT
- ▁RIGHT
- ▁RE
- ▁HALLELUJAH
- ▁TH
- NG
- ▁CO
- ▁WERE
- ▁TALK
- ET
- ▁BO
- ▁HOLD
- UR
- ▁BEEN
- ▁US
- ▁PA
- VER
- ▁EYES
- ▁DREAM
- ▁SONG
- ▁SHOULD
- ▁STILL
- ▁OVER
- TA
- ▁ANYMORE
- IGHT
- ▁STAY
- ▁BETTER
- LESS
- ▁THROUGH
- ▁LITTLE
- X
- ▁GONE
- ▁AIN
- ▁DA
- ▁HOLDING
- ▁HURT
- ▁TRY
- ▁FIND
- Z
- DE
- ▁LAST
- ▁SAID
- ▁ALWAYS
- ▁BODY
- ▁MIND
- ▁CRY
- ▁EVEN
- ▁RUN
- ▁HOPE
- ▁WITHOUT
- ▁MISS
- ▁ABOUT
- ▁HAND
- ▁J
- ▁AGAIN
- ▁THOUGH
- ▁NAH
- ▁LIVE
- ▁BA
- ▁OLD
- ▁HEAD
- ▁FIRE
- ▁MAN
- ▁SOMETHING
- ▁WHY
- THER
- ▁HOME
- ▁OR
- ▁INSIDE
- ▁NEW
- ▁HEY
- TION
- ▁EVERYTHING
- ▁HAD
- ▁SOMETIMES
- ▁HARD
- ▁TOUCH
- ▁HEAR
- ▁AM
- ▁MUCH
- ▁LONG
- ▁STAR
- GETTING
- ▁WALK
- ▁PEOPLE
- ▁BEFORE
- ▁CLOSE
- ▁TWO
- ▁FAR
- ▁SHOW
- ▁STAND
- ▁LOSE
- ▁HELP
- ▁NAME
- ▁BOY
- ▁TRUE
- ▁PLAY
- ▁DARK
- ▁THINGS
- ▁NA
- ▁TEAR
- ▁END
- ▁NOBODY
- ▁SEA
- ▁ROCKABYE
- ▁BELIEVE
- ▁BROKE
- ▁AROUND
- ▁START
- ▁KISS
- ▁FEELING
- ▁BREAK
- ▁SOMEONE
- ▁FRIEND
- ▁ALONE
- ▁BEAUTIFUL
- ▁CRAZY
- ▁OWN
- OSE
- ▁STOP
- ▁LOST
- ▁HIM
- ▁BAD
- ▁CHANCE
- ▁REALLY
- ▁WISH
- ▁MOVE
- ▁SKY
- ▁PLACE
- AKE
- ▁LEAVE
- ▁YA
- ▁STRONG
- ▁PUT
- ▁OPEN
- ▁WRONG
- ▁COLD
- OCK
- ▁USED
- ▁FOUND
- ▁LONELY
- ▁DANCE
- EACH
- ▁ANOTHER
- ▁SIDE
- ▁UNDER
- ▁MATTER
- ▁THESE
- ▁CARE
- ▁MINE
- ▁SHINE
- ▁AFRAID
- ▁TURN
- ▁PLEASE
- ▁SUN
- ▁DIAMOND
- ▁UNTIL
- ▁FACE
- ▁LEARN
- ▁TRUST
- ▁WONDER
- ▁BREATH
- ATE
- ▁SORRY
- ▁HU
- ▁WATCH
- ▁LATE
- ROUND
- ▁ARMS
- ▁PERFECT
- ▁MAYBE
- ▁PULL
- ▁REMEMBER
- ▁FIGHT
- ▁MYSELF
- ▁INTO
- ▁DARLING
- ▁THUNDER
- ▁FOLLOW
- ▁REASON
- ▁BURN
- ▁HIS
- ▁MUST
- ▁FREE
- ▁FLASHLIGHT
- ▁1
- ▁ENOUGH
- ▁DRINK
- ▁WORDS
- ▁HIDE
- ▁UN
- ▁FORGET
- ▁SURE
- ▁CHANGE
- ▁SMILE
- ▁PROMISE
- ▁FOREVER
- '2'
- ▁SWEET
- ▁SAME
- ▁OOOH
- ▁PART
- ▁SOMEBODY
- NESS
- ▁BRIGHT
- ▁HEAVEN
- ▁DEEP
- ▁HIGH
- ▁INSTEAD
- ▁MOMENT
- ▁ALONG
- ▁ALRIGHT
- ▁SLOW
- ▁TOMORROW
- ▁SOUL
- ▁QU
- ▁PUSH
- ▁CHANDELIER
- ▁LEFT
- SIDE
- ▁TOLD
- ▁KNEW
- READY
- ▁LOVING
- ▁SAW
- '3'
- ▁WORK
- ▁DANCING
- ▁THREE
- ▁SAVE
- ▁SHOOT
- ▁LEAD
- ▁SKI
- ▁WILD
- ▁WIND
- ▁WHILE
- ▁EDGE
- ▁HAPPY
- ▁FEAR
- STUCK
- ▁MOST
- ▁LISTEN
- ▁WOAH
- ▁FIRST
- ▁JOLENE
- ▁VOICE
- ▁COMP
- ▁MILLION
- FUL
- ▁OOOOOH
- ▁CAME
- ▁RISE
- ▁NEXT
- ▁COUNT
- ▁MOUNTAIN
- ▁ROOM
- ▁BLUE
- ▁HIT
- ▁RAISE
- J
- ▁THOUSAND
- ▁SHAP
- ▁TREAT
- ▁DRY
- ▁FINALLY
- ▁TITANIUM
- ▁CARRY
- ▁TRUTH
- ▁WATER
- ▁MORNING
- TIME
- ▁BELONG
- ▁UMA
- ▁ALIVE
- ▁ELSE
- ▁ANGEL
- ▁BRAND
- ▁APART
- ▁EVERYBODY
- ▁SOUND
- ▁GUESS
- ▁PRAY
- ▁FAITH
- ▁AFTER
- ▁THROW
- ▁TRIED
- ▁SLEEP
- ▁FOOL
- ▁DISCOVERING
- ▁FUCK
- ▁TASTE
- ▁UNDERSTAND
- ▁SHAME
- ▁POWER
- ▁WELCOME
- ▁FELT
- ▁SAFE
- ▁DESERVE
- ▁GAME
- ▁SUPERMA
- ▁SWEAR
- ▁BETWEEN
- ▁GLASS
- ▁CATCH
- ▁TOGETHER
- '0'
- '4'
- '6'
- '5'
- '1'
- '8'
- '7'
- '9'
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
    dropout_rate: 0.0
    ctc_type: builtin
    reduce: true
    ignore_nan_grad: true
joint_net_conf: null
model_conf:
    ctc_weight: 0.3
    lsm_weight: 0.1
    length_normalized_loss: false
    extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
    frontend_conf:
        upstream: hubert_large_ll60k
    download_dir: ./hub
    multilayer_feature: true
    fs: 16k
specaug: specaug
specaug_conf:
    apply_time_warp: true
    time_warp_window: 5
    time_warp_mode: bicubic
    apply_freq_mask: true
    freq_mask_width_range:
    - 0
    - 30
    num_freq_mask: 2
    apply_time_mask: true
    time_mask_width_range:
    - 0
    - 40
    num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
    input_size: 1024
    output_size: 80
encoder: conformer
encoder_conf:
    output_size: 512
    attention_heads: 8
    linear_units: 2048
    num_blocks: 12
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    attention_dropout_rate: 0.1
    input_layer: conv2d2
    normalize_before: true
    macaron_style: true
    pos_enc_layer_type: rel_pos
    selfattention_layer_type: rel_selfattn
    activation_type: swish
    use_cnn_module: true
    cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
    attention_heads: 8
    linear_units: 2048
    num_blocks: 6
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    self_attention_dropout_rate: 0.1
    src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit}, 
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
 | 
| 
	Wikidepia/gpt2-spam | 
	Wikidepia | 2022-03-20T01:10:59Z | 4 | 1 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "license:apache-2.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-20T01:08:27Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-spam
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-spam
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	beston91/gpt2-xl_ft_mult_1k | 
	beston91 | 2022-03-19T23:56:20Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-18T23:49:34Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_1k
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_1k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.91  | 5    | 6.7968          |
| No log        | 1.91  | 10   | 6.6621          |
| No log        | 2.91  | 15   | 6.4335          |
| No log        | 3.91  | 20   | 6.1137          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	msamogh/autonlp-cai-out-of-scope-649919112 | 
	msamogh | 2022-03-19T21:40:41Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "distilbert",
  "text-classification",
  "autonlp",
  "en",
  "dataset:msamogh/autonlp-data-cai-out-of-scope",
  "co2_eq_emissions",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-19T21:40:14Z | 
	---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- msamogh/autonlp-data-cai-out-of-scope
co2_eq_emissions: 0.49924480682533606
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 649919112
- CO2 Emissions (in grams): 0.49924480682533606
## Validation Metrics
- Loss: 0.49354293942451477
- Accuracy: 0.8064516129032258
- Precision: 0.8181818181818182
- Recall: 0.9
- AUC: 0.8689393939393939
- F1: 0.8571428571428572
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/msamogh/autonlp-cai-out-of-scope-649919112
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned binary classifier and its tokenizer
# (use_auth_token=True is needed if the repository is private).
model = AutoModelForSequenceClassification.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919112", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919112", use_auth_token=True)

# Tokenize a single utterance and run a forward pass; outputs.logits holds the class scores.
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 
| 
	ronykroy/distilbert-base-uncased-finetuned-emotion | 
	ronykroy | 2022-03-19T17:55:13Z | 15 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "distilbert",
  "text-classification",
  "generated_from_trainer",
  "dataset:emotion",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-19T17:30:38Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.922
    - name: F1
      type: f1
      value: 0.9222310284051585
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2334
- Accuracy: 0.922
- F1: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8454        | 1.0   | 250  | 0.3308          | 0.8975   | 0.8937 |
| 0.2561        | 2.0   | 500  | 0.2334          | 0.922    | 0.9222 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
 | 
| 
	sanchit-gandhi/wav2vec2-2-gpt2-no-adapter-regularisation | 
	sanchit-gandhi | 2022-03-19T17:43:39Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "speech-encoder-decoder",
  "automatic-speech-recognition",
  "generated_from_trainer",
  "dataset:librispeech_asr",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-17T16:34:45Z | 
	---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7494
- Wer: 1.0532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4828        | 2.8   | 2500  | 4.0554          | 1.7873 |
| 0.8683        | 5.61  | 5000  | 2.5401          | 1.3156 |
| 0.4394        | 8.41  | 7500  | 1.7519          | 1.1129 |
| 0.0497        | 11.21 | 10000 | 1.7102          | 1.0738 |
| 0.031         | 14.01 | 12500 | 1.7395          | 1.0512 |
| 0.0508        | 16.82 | 15000 | 1.7254          | 1.0463 |
| 0.0462        | 19.62 | 17500 | 1.7494          | 1.0532 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
 | 
| 
	ShahafAricha/nqg-gpt2 | 
	ShahafAricha | 2022-03-19T17:20:23Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "gpt2",
  "text-generation",
  "license:other",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-18T21:51:57Z | 
	---
license: other
datasets:
- squad
tags:
- question-generation
widget:
- text: "The Technikum was conceived in the early 1900s by the German-Jewish fund Ezrah as a school of [HL]engineering and sciences[HL].[SEP]"
---
# Transformer QG on SQuAD
HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)
**This is a reproduced version trained on the distilled SQuAD dataset.**
More detail: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)
## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
``` | 
| 
	sanchit-gandhi/wav2vec2-2-gpt2-regularisation | 
	sanchit-gandhi | 2022-03-19T17:11:48Z | 6 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "speech-encoder-decoder",
  "automatic-speech-recognition",
  "generated_from_trainer",
  "dataset:librispeech_asr",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-17T16:34:24Z | 
	---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8529
- Wer: 0.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5506        | 2.8   | 2500  | 4.4928          | 1.8772 |
| 0.5145        | 5.61  | 5000  | 1.8942          | 1.1063 |
| 0.2736        | 8.41  | 7500  | 1.6550          | 1.0372 |
| 0.0807        | 11.21 | 10000 | 1.7601          | 1.0004 |
| 0.0439        | 14.01 | 12500 | 1.8014          | 1.0022 |
| 0.043         | 16.82 | 15000 | 1.8534          | 1.0097 |
| 0.0434        | 19.62 | 17500 | 1.8529          | 0.9977 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
 | 
| 
	xyfigo/distilbert-base-uncased-finetuned-emotion | 
	xyfigo | 2022-03-19T15:30:31Z | 6 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "distilbert",
  "text-classification",
  "generated_from_trainer",
  "dataset:emotion",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-19T15:10:17Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.928
    - name: F1
      type: f1
      value: 0.9281714323715586
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2286
- Accuracy: 0.928
- F1: 0.9282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8579        | 1.0   | 250  | 0.3272          | 0.903    | 0.9008 |
| 0.2543        | 2.0   | 500  | 0.2286          | 0.928    | 0.9282 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
 | 
| 
	richardc7/electricidad-small-finetuned-amazon-review-classification | 
	richardc7 | 2022-03-19T15:29:47Z | 8 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "electra",
  "text-classification",
  "generated_from_trainer",
  "dataset:amazon_reviews_multi",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-17T12:37:33Z | 
	---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: electricidad-small-finetuned-amazon-review-classification
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: amazon_reviews_multi
      type: amazon_reviews_multi
      args: es
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.581
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-finetuned-amazon-review-classification
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9601
- Accuracy: 0.581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0136        | 1.0   | 25000 | 1.0153          | 0.5414   |
| 0.9416        | 2.0   | 50000 | 0.9942          | 0.5576   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	IsaacSST/gpt2-xl-ft-d3 | 
	IsaacSST | 2022-03-19T15:18:26Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-19T12:41:36Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d3
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d3
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 156  | 1.2135          |
| No log        | 2.0   | 312  | 1.2181          |
| No log        | 3.0   | 468  | 1.2754          |
| 1.1743        | 4.0   | 624  | 1.3252          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	IsaacSST/gpt2-xl-ft-d2 | 
	IsaacSST | 2022-03-18T20:51:40Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-18T18:10:19Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d2
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 156  | 1.2309          |
| No log        | 2.0   | 312  | 1.2382          |
| No log        | 3.0   | 468  | 1.2997          |
| 1.172         | 4.0   | 624  | 1.3483          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	saghar/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large-finetuned-wikitext103 | 
	saghar | 2022-03-18T19:10:05Z | 5 | 1 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "roberta",
  "fill-mask",
  "generated_from_trainer",
  "dataset:wikitext",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	fill-mask | 2022-03-16T03:59:15Z | 
	---
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: MiniLMv2-L6-H768-distilled-from-RoBERTa-Large-finetuned-wikitext103
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H768-distilled-from-RoBERTa-Large-finetuned-wikitext103
This model is a fine-tuned version of [nreimers/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.6806        | 1.0   | 3125 | 3.9691          |
| 4.0441        | 2.0   | 6250 | 3.7885          |
| 3.9509        | 3.0   | 9375 | 3.7556          |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
 | 
| 
	huggingtweets/sappublicsector | 
	huggingtweets | 2022-03-18T17:46:32Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "gpt2",
  "text-generation",
  "huggingtweets",
  "en",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-18T17:37:33Z | 
	---
language: en
thumbnail: http://www.huggingtweets.com/sappublicsector/1647625586483/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
    <div class="flex">
        <div
			style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486782108030930950/2JS43mTA_400x400.jpg')">
        </div>
        <div
            style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
        </div>
        <div
            style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
        </div>
    </div>
    <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
    <div style="text-align: center; font-size: 16px; font-weight: 800">SAP Public Sector</div>
    <div style="text-align: center; font-size: 14px;">@sappublicsector</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
To understand the full pipeline and how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from SAP Public Sector.
| Data | SAP Public Sector |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 38 |
| Short tweets | 0 |
| Tweets kept | 3162 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2alb74qi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sappublicsector's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/sppp2pwd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/sppp2pwd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/sappublicsector')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[Follow @borisdayma on Twitter](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the [project repository](https://github.com/borisdayma/huggingtweets).
 | 
| 
	TestSB3/ppo-CartPole-v1 | 
	TestSB3 | 2022-03-18T13:41:36Z | 0 | 0 | null | 
	[
  "gym",
  "reinforcement-learning",
  "region:us"
] | 
	reinforcement-learning | 2022-03-18T10:20:52Z | 
	---
tags:
- gym
- reinforcement-learning
---
# TestSB3/ppo-CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1 using the [rl-baselines3-zoo](https://github.com/DLR-RM/rl-baselines3-zoo) library.
## Usage (with RL-baselines3-zoo)
Just clone the [rl-baselines3-zoo](https://github.com/DLR-RM/rl-baselines3-zoo) library.
Then run:
```
python enjoy.py --algo ppo --env CartPole-v1
```
## Evaluation Results
Mean Reward: 500.0 +/- 0.0 (300 test episodes)
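The checkpoint can also be evaluated directly with stable-baselines3; the sketch below is illustrative, and the `filename` argument is an assumption that should be checked against the files actually stored in this repo:
```python
# pip install stable-baselines3 huggingface_sb3 gym
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is assumed, not confirmed by this card).
checkpoint = load_from_hub(repo_id="TestSB3/ppo-CartPole-v1", filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)

env = gym.make("CartPole-v1")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```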
## Citing the Project
To cite this repository in publications:
```
@misc{rl-zoo3,
  author = {Raffin, Antonin},
  title = {RL Baselines3 Zoo},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/DLR-RM/rl-baselines3-zoo}},
}
``` | 
| 
	reichenbach/switch-transformer-classification | 
	reichenbach | 2022-03-18T10:53:01Z | 2 | 2 | 
	tf-keras | 
	[
  "tf-keras",
  "generic",
  "switch-transformers",
  "mixture-of-experts",
  "arxiv:2101.03961",
  "region:us"
] | null | 2022-03-06T09:20:19Z | 
	---
tags:
- generic
- switch-transformers
- mixture-of-experts
---
## Tensorflow Keras Implementation of Switch Transformers for Text Classification.
This repo contains the model from [Switch Transformers for Text Classification](https://keras.io/examples/nlp/text_classification_with_switch_transformer/).
Credits: [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/) - Original Author
HF Contribution: [Rishav Chandra Varma](https://huggingface.co/reichenbach)  
## Background Information
### Introduction
In this example, we demonstrate an implementation of the [Switch Transformer](https://arxiv.org/abs/2101.03961) model for text classification. For this example, we use the IMDB dataset available in the Keras datasets module.
### What is special about the Switch Transformer?
The Switch Transformer replaces the feed-forward network (FFN) layer in the standard Transformer with a Mixture of Experts (MoE) routing layer, where each expert operates independently on the tokens in the sequence. This allows increasing the model size without increasing the computation needed to process each example.
Note that, for training the Switch Transformer efficiently, data and model parallelism need to be applied, so that expert modules can run simultaneously, each on its own accelerator. While the implementation described in the paper uses the [TensorFlow Mesh](https://github.com/tensorflow/mesh) framework for distributed training, this example presents a simple, non-distributed implementation of the Switch Transformer model for demonstration purposes.
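To make the routing idea concrete, here is a minimal, simplified sketch of a top-1 (Switch) expert layer. Unlike the linked Keras example, it runs every expert on every token and masks the outputs instead of dispatching tokens with a capacity factor, so it trades efficiency for readability:
```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


class SwitchFFN(layers.Layer):
    """Simplified top-1 Mixture-of-Experts feed-forward layer (illustrative only)."""

    def __init__(self, d_model, d_ff, num_experts, **kwargs):
        super().__init__(**kwargs)
        self.router = layers.Dense(num_experts)  # token -> expert logits
        self.experts = [
            keras.Sequential([layers.Dense(d_ff, activation="relu"),
                              layers.Dense(d_model)])
            for _ in range(num_experts)
        ]

    def call(self, x):                                        # x: (batch, seq, d_model)
        probs = tf.nn.softmax(self.router(x), axis=-1)        # (batch, seq, experts)
        expert_idx = tf.argmax(probs, axis=-1)                # top-1 expert per token
        gate = tf.reduce_max(probs, axis=-1, keepdims=True)   # its router probability
        one_hot = tf.one_hot(expert_idx, depth=len(self.experts))
        expert_out = tf.stack([e(x) for e in self.experts], axis=-1)  # (b, s, d, E)
        mask = one_hot[:, :, tf.newaxis, :]                   # broadcast over d_model
        # Keep only the selected expert's output, scaled by its router probability.
        return tf.reduce_sum(expert_out * mask, axis=-1) * gate


layer = SwitchFFN(d_model=32, d_ff=64, num_experts=4)
print(layer(tf.random.normal((2, 10, 32))).shape)  # (2, 10, 32)
```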
 | 
| 
	sven-nm/roberta_classics_ner | 
	sven-nm | 2022-03-18T10:14:20Z | 22 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "roberta",
  "token-classification",
  "classics",
  "citation mining",
  "en",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-02T23:29:05Z | 
	---
language: 
  - en
tags:
- classics
- citation mining
widget:
- text: "Homer's Iliad opens with an invocation to the muse (1. 1)."
---
### Model and entities
`roberta_classics_ner` is a domain-specific RoBERTa-based model for named entity recognition in Classical Studies. It recognises bibliographical entities, such as:
| id  | label         | description                                 | Example               |
| --- | ------------- | ------------------------------------------- | --------------------- |
| 0   | 'O'           | Out of entity                               |                       | 
| 1   | 'B-AAUTHOR'   | Ancient authors                             | *Herodotus*           |
| 2   | 'I-AAUTHOR'   |                                             |                       |
| 3   | 'B-AWORK'     | The title of an ancient work                | *Symposium*, *Aeneid* |
| 4   | 'I-AWORK'     |                                             |                       |
| 5   | 'B-REFAUWORK' | A structured reference to an ancient work   | *Homer, Il.*          |
| 6   | 'I-REFAUWORK' |                                             |                       |
| 7   | 'B-REFSCOPE'  | The scope of a reference                    | *II.1.993a30–b11*     |
| 8   | 'I-REFSCOPE'  |                                             |                       |
| 9   | 'B-FRAGREF'   | A reference to fragmentary texts or scholia | *Frag. 19. West*      |
| 10  | 'I-FRAGREF'   |                                             |                       |
### Example
```
B-AAUTHOR   B-AWORK                                      B-REFSCOPE
Homer  's   Iliad opens with an invocation to the muse ( 1. 1).
```
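A minimal usage sketch with the standard `transformers` token-classification pipeline (the aggregation strategy is only a suggestion and may need tuning for your texts):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="sven-nm/roberta_classics_ner",
               aggregation_strategy="simple")
print(ner("Homer's Iliad opens with an invocation to the muse (1. 1)."))
```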
### Dataset
`roberta_classics_ner` was fine-tuned and evaluated on `EpiBau`, a dataset which has not been released publicly yet. It is composed of four volumes of [Structures of Epic Poetry](https://www.epische-bauformen.uni-rostock.de/), a compendium on the narrative patterns and structural elements in ancient epic. 
Entity counts of the `EpiBau` dataset are the following: 
|                | train-set | dev-set | test-set |
| -------------- | --------- | ------- | -------- |
| word count     | 712462    | 125729  | 122324   |
| AAUTHOR        | 4436      | 1368    | 1511     |
| AWORK          | 3145      | 780     | 670      |
| REFAUWORK      | 5102      | 988     | 1209     |
| REFSCOPE       | 14768     | 3193    | 2847     |
| FRAGREF        | 266       | 29      | 33       |
| total entities | 13822     | 1415    | 2419     |
### Results
The model was developed in the context of experiments reported [here](http://infoscience.epfl.ch/record/291236?&ln=en). Trained and tested on `EpiBau` with an 85-15 split, the model yields a general F1 score of **.82** (micro-averaged). Detailed scores are displayed below. Evaluation was performed with the [CLEF-HIPE-scorer](https://github.com/impresso/CLEF-HIPE-2020-scorer) in strict mode.
| metric    | AAUTHOR | AWORK | REFSCOPE | REFAUWORK |
| --------- | ------- | ----- | -------- | --------- |
| F1        | .819    | .796  | .863     | .756      |
| Precision | .842    | .818  | .860     | .755      |
| Recall    | .797    | .766  | .756     | .866      | 
Questions, remarks, help, or contributions? Get in touch [here](https://github.com/AjaxMultiCommentary); we'll be happy to chat!
 | 
| 
	aaraki/bert-base-uncased-finetuned-swag | 
	aaraki | 2022-03-18T08:16:58Z | 1 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "bert",
  "multiple-choice",
  "generated_from_trainer",
  "dataset:swag",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	multiple-choice | 2022-03-18T06:29:45Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5155
- Accuracy: 0.8002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6904        | 1.0   | 4597 | 0.5155          | 0.8002   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	moshew/paraphrase-mpnet-base-v2_SetFit_sst2 | 
	moshew | 2022-03-18T07:53:15Z | 1 | 1 | 
	sentence-transformers | 
	[
  "sentence-transformers",
  "pytorch",
  "mpnet",
  "feature-extraction",
  "sentence-similarity",
  "transformers",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	sentence-similarity | 2022-03-18T07:53:07Z | 
	---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# moshew/paraphrase-mpnet-base-v2_SetFit_sst2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('moshew/paraphrase-mpnet-base-v2_SetFit_sst2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_sst2')
model = AutoModel.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_sst2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=moshew/paraphrase-mpnet-base-v2_SetFit_sst2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8650 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` 
Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 10,
    "weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 
| 
	moshew/paraphrase-mpnet-base-v2_SetFit_emotions | 
	moshew | 2022-03-18T07:16:29Z | 3 | 0 | 
	sentence-transformers | 
	[
  "sentence-transformers",
  "pytorch",
  "mpnet",
  "feature-extraction",
  "sentence-similarity",
  "transformers",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	sentence-similarity | 2022-03-18T07:16:19Z | 
	---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# moshew/paraphrase-mpnet-base-v2_SetFit_emotions
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('moshew/paraphrase-mpnet-base-v2_SetFit_emotions')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_emotions')
model = AutoModel.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_emotions')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=moshew/paraphrase-mpnet-base-v2_SetFit_emotions)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2500 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` 
Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 10,
    "weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 
| 
	youzanai/bert-product-title-chinese | 
	youzanai | 2022-03-18T06:19:06Z | 6 | 3 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "bert",
  "fill-mask",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	fill-mask | 2022-03-02T23:29:05Z | 
A BERT model trained on Youzan's product-title corpus.
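A minimal fill-mask sketch, assuming the standard `transformers` pipeline works for this checkpoint (the official examples are linked below):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="youzanai/bert-product-title-chinese")
masked_title = f"纯棉男士短袖{fill.tokenizer.mask_token}恤"  # "cotton men's short-sleeve _-shirt"
print(fill(masked_title))
```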
For example code using the model, see https://github.com/youzanai/trexpark |
| 
	brad1141/Longformer-finetuned-norm | 
	brad1141 | 2022-03-18T05:42:11Z | 61 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "longformer",
  "token-classification",
  "generated_from_trainer",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-18T02:29:24Z | 
	---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Longformer-finetuned-norm
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Longformer-finetuned-norm
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8127
- Precision: 0.8429
- Recall: 0.8701
- F1: 0.8562
- Accuracy: 0.8221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8008        | 1.0   | 1012 | 0.5839          | 0.8266    | 0.8637 | 0.8447 | 0.8084   |
| 0.5168        | 2.0   | 2024 | 0.5927          | 0.7940    | 0.9102 | 0.8481 | 0.8117   |
| 0.3936        | 3.0   | 3036 | 0.5651          | 0.8476    | 0.8501 | 0.8488 | 0.8143   |
| 0.2939        | 4.0   | 4048 | 0.6411          | 0.8494    | 0.8578 | 0.8536 | 0.8204   |
| 0.2165        | 5.0   | 5060 | 0.6833          | 0.8409    | 0.8822 | 0.8611 | 0.8270   |
| 0.1561        | 6.0   | 6072 | 0.7643          | 0.8404    | 0.8810 | 0.8602 | 0.8259   |
| 0.1164        | 7.0   | 7084 | 0.8127          | 0.8429    | 0.8701 | 0.8562 | 0.8221   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	beston91/gpt2-xl-ft-logits-5k | 
	beston91 | 2022-03-18T02:54:46Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-17T23:54:46Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-vanilla-debiased-5000
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-vanilla-debiased-5000
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.99  | 27   | 6.1985          |
| No log        | 1.99  | 54   | 6.4583          |
| No log        | 2.99  | 81   | 6.7709          |
| No log        | 3.99  | 108  | 7.0371          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	brad1141/Longformer-finetuned-comp5 | 
	brad1141 | 2022-03-18T02:21:19Z | 6 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "longformer",
  "token-classification",
  "generated_from_trainer",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-17T23:09:34Z | 
	---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Longformer-finetuned-comp5
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Longformer-finetuned-comp5
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8180
- Precision: 0.5680
- Recall: 0.7490
- F1: 0.6430
- Accuracy: 0.6430
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8296        | 1.0   | 1012 | 0.5801          | 0.4806    | 0.6633 | 0.5448 | 0.5448   |
| 0.5367        | 2.0   | 2024 | 0.5386          | 0.5617    | 0.7042 | 0.6172 | 0.6172   |
| 0.4109        | 3.0   | 3036 | 0.5755          | 0.5590    | 0.7261 | 0.6248 | 0.6248   |
| 0.3088        | 4.0   | 4048 | 0.6167          | 0.5775    | 0.7394 | 0.6435 | 0.6435   |
| 0.2234        | 5.0   | 5060 | 0.7098          | 0.5626    | 0.7477 | 0.6370 | 0.6370   |
| 0.1637        | 6.0   | 6072 | 0.7399          | 0.5742    | 0.7413 | 0.6438 | 0.6438   |
| 0.1236        | 7.0   | 7084 | 0.8180          | 0.5680    | 0.7490 | 0.6430 | 0.6430   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	anton-l/xtreme_s_xlsr_300m_minds14_old_splits | 
	anton-l | 2022-03-17T22:23:22Z | 8 | 1 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "wav2vec2",
  "audio-classification",
  "automatic-speech-recognition",
  "google/xtreme_s",
  "generated_from_trainer",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-14T18:02:05Z | 
	---
license: apache-2.0
tags:
- automatic-speech-recognition
- google/xtreme_s
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xtreme_s_xlsr_minds14
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_minds14
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2890
- F1: 0.9474
- Accuracy: 0.9470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 2.551         | 2.7   | 200  | 2.5855          | 0.0407 | 0.1201   |
| 1.6934        | 5.41  | 400  | 1.5072          | 0.5862 | 0.6085   |
| 0.5914        | 8.11  | 600  | 0.7274          | 0.8270 | 0.8232   |
| 0.3896        | 10.81 | 800  | 0.4402          | 0.8905 | 0.8890   |
| 0.5052        | 13.51 | 1000 | 0.4483          | 0.8837 | 0.8829   |
| 0.4806        | 16.22 | 1200 | 0.4981          | 0.8784 | 0.8787   |
| 0.2103        | 18.92 | 1400 | 0.4957          | 0.8810 | 0.8817   |
| 0.4198        | 21.62 | 1600 | 0.5161          | 0.8927 | 0.8921   |
| 0.11          | 24.32 | 1800 | 0.4456          | 0.8923 | 0.8902   |
| 0.1233        | 27.03 | 2000 | 0.3858          | 0.9016 | 0.9012   |
| 0.1827        | 29.73 | 2200 | 0.3765          | 0.9162 | 0.9159   |
| 0.1235        | 32.43 | 2400 | 0.3716          | 0.9134 | 0.9128   |
| 0.1873        | 35.14 | 2600 | 0.3080          | 0.9314 | 0.9311   |
| 0.017         | 37.84 | 2800 | 0.2629          | 0.9415 | 0.9409   |
| 0.0436        | 40.54 | 3000 | 0.3159          | 0.9397 | 0.9390   |
| 0.0455        | 43.24 | 3200 | 0.2963          | 0.9393 | 0.9390   |
| 0.046         | 45.95 | 3400 | 0.2914          | 0.9457 | 0.9451   |
| 0.0042        | 48.65 | 3600 | 0.2890          | 0.9474 | 0.9470   |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
 | 
| 
	niksss/Hinglish-HATEBERT | 
	niksss | 2022-03-17T18:43:00Z | 13 | 2 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "bert",
  "feature-extraction",
  "license:afl-3.0",
  "endpoints_compatible",
  "region:us"
] | 
	feature-extraction | 2022-03-17T17:47:30Z | 
	---
license: afl-3.0
---
Fine-tune it using this [notebook](https://colab.research.google.com/drive/1JRmrAYR0pcEWyni_VtT4SSFxZ5adlAhS?usp=sharing).
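A minimal feature-extraction sketch, assuming the checkpoint loads with the standard `transformers` auto classes (not an official example from the author):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("niksss/Hinglish-HATEBERT")
model = AutoModel.from_pretrained("niksss/Hinglish-HATEBERT")

inputs = tokenizer("yeh movie bahut achhi thi", return_tensors="pt")
with torch.no_grad():
    cls_embedding = model(**inputs).last_hidden_state[:, 0]  # [CLS] vector
print(cls_embedding.shape)
```
 | 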
| 
	sanchit-gandhi/wav2vec2-2-bart-debug | 
	sanchit-gandhi | 2022-03-17T16:28:55Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "speech-encoder-decoder",
  "automatic-speech-recognition",
  "generated_from_trainer",
  "dataset:librispeech_asr",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-17T14:46:58Z | 
	---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 
This model was trained from scratch on the librispeech_asr dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
 | 
| 
	Alvenir/wav2vec2-base-da-ft-nst | 
	Alvenir | 2022-03-17T16:16:12Z | 13 | 3 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "wav2vec2",
  "automatic-speech-recognition",
  "speech-to-text",
  "da",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-15T08:16:18Z | 
	---
language: da
tags:
- speech-to-text
license: apache-2.0
---
# wav2vec2-base-da-ft-nst 
This is the [Alvenir wav2vec2 model](https://huggingface.co/Alvenir/wav2vec2-base-da) for Danish ASR, fine-tuned by Alvenir on the public NST dataset. The model was trained on 16 kHz audio, so make sure your data uses the same sample rate.
The model was trained using fairseq and then converted to huggingface/transformers format.
Alvenir is always happy to help with your own open-source ASR projects, customized domain specializations or premium models. ;-)
## Usage
```Python
import soundfile as sf
import torch
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2Processor, \
    Wav2Vec2ForCTC
def get_tokenizer(model_path: str) -> Wav2Vec2CTCTokenizer:
    # Use the CTC tokenizer to match the annotated return type
    # (Wav2Vec2Tokenizer is deprecated in recent transformers versions).
    return Wav2Vec2CTCTokenizer.from_pretrained(model_path)
def get_processor(model_path: str) -> Wav2Vec2Processor:
    return Wav2Vec2Processor.from_pretrained(model_path)
def load_model(model_path: str) -> Wav2Vec2ForCTC:
    return Wav2Vec2ForCTC.from_pretrained(model_path)
model_id = "Alvenir/wav2vec2-base-da-ft-nst"
model = load_model(model_id)
model.eval()
tokenizer = get_tokenizer(model_id)
processor = get_processor(model_id)
audio_file = "<path/to/audio.wav>"
audio, _ = sf.read(audio_file)
input_values = processor(audio, return_tensors="pt", padding="longest", sampling_rate=16_000).input_values
with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```
## Benchmark results
These are benchmark results on the publicly available Danish datasets.
| Dataset                 | WER Greedy | WER with 3-gram Language Model |
|-------------------------|------------|--------------------------------|
| NST test                | 15.8%      | 11.9%                          |
| alvenir-asr-da-eval     | 19.0%      | 12.1%                          |
| common_voice_80 da test | 26.3%      | 19.2%                          |
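WER numbers like the ones above can be computed with the `jiwer` package; the sketch below is illustrative and is not the exact evaluation script used for this table:
```python
# pip install jiwer
from jiwer import wer

references = ["der er en mand på vejen", "hun køber brød i butikken"]
hypotheses = ["der er en mand på vejen", "hun købte brød i butikken"]
print(f"WER: {wer(references, hypotheses):.1%}")
```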
 | 
| 
	saghar/TinyBERT_L-4_H-312_v2-finetuned-wikitext103 | 
	saghar | 2022-03-17T15:59:39Z | 9 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "bert",
  "fill-mask",
  "generated_from_trainer",
  "dataset:wikitext",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	fill-mask | 2022-03-17T12:52:55Z | 
	---
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: TinyBERT_L-4_H-312_v2-finetuned-wikitext103
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyBERT_L-4_H-312_v2-finetuned-wikitext103
This model is a fine-tuned version of [nreimers/TinyBERT_L-4_H-312_v2](https://huggingface.co/nreimers/TinyBERT_L-4_H-312_v2) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0604        | 1.0   | 3125 | 6.6745          |
| 6.7122        | 2.0   | 6250 | 6.5061          |
| 6.6289        | 3.0   | 9375 | 6.4638          |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
 | 
| 
	StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES | 
	StivenLancheros | 2022-03-17T14:49:03Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "bert",
  "token-classification",
  "generated_from_trainer",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-15T22:44:16Z | 
	---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Precision: 0.8276
- Recall: 0.8411
- F1: 0.8343
- Accuracy: 0.9676
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) corpus in Spanish (MT translated) and English. Entity tags have been normalized and replaced from the original three-letter code to a full name, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement. 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Three datasets (original, augmented, MT translated CRAFT) were concatenated.
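As an illustration of the entity-replacement augmentation described above, here is a hypothetical sketch (the actual augmentation script is not part of this card; `entity_lists` stands in for the per-class entity lists drawn from the official ontologies):
```python
import random

def augment(tokens, labels, entity_lists, replace_prob=0.2):
    """Replace roughly 20% of entity mentions with a random entity of the same class."""
    out_tokens, out_labels = [], []
    i = 0
    while i < len(tokens):
        label = labels[i]
        if label.startswith("B-") and random.random() < replace_prob:
            cls = label[2:]                                   # e.g. "Protein"
            j = i + 1                                         # skip the original span
            while j < len(labels) and labels[j] == f"I-{cls}":
                j += 1
            replacement = random.choice(entity_lists[cls]).split()
            out_tokens.extend(replacement)
            out_labels.extend([f"B-{cls}"] + [f"I-{cls}"] * (len(replacement) - 1))
            i = j
        else:
            out_tokens.append(tokens[i])
            out_labels.append(label)
            i += 1
    return out_tokens, out_labels

tokens = ["The", "p53", "protein", "regulates", "apoptosis", "."]
labels = ["O", "B-Protein", "I-Protein", "O", "O", "O"]
print(augment(tokens, labels, {"Protein": ["BRCA1", "tumor necrosis factor"]}))
```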
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0549        | 1.0   | 4078  | 0.1673          | 0.8056    | 0.8112 | 0.8084 | 0.9640   |
| 0.0233        | 2.0   | 8156  | 0.1733          | 0.8321    | 0.8244 | 0.8283 | 0.9662   |
| 0.0101        | 3.0   | 12234 | 0.1972          | 0.8336    | 0.8391 | 0.8363 | 0.9678   |
| 0.0036        | 4.0   | 16312 | 0.2251          | 0.8276    | 0.8411 | 0.8343 | 0.9676   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN | 
	StivenLancheros | 2022-03-17T14:45:49Z | 654 | 1 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "bert",
  "token-classification",
  "generated_from_trainer",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-15T22:41:38Z | 
	---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2299
- Precision: 0.8122
- Recall: 0.8475
- F1: 0.8294
- Accuracy: 0.9661
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) corpus in Spanish and English. Entity tags have been normalized and replaced from the original three-letter code to a full name, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement. 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Both datasets (original, augmented) were concatenated.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0542        | 1.0   | 2719  | 0.1540          | 0.7834    | 0.8300 | 0.8060 | 0.9622   |
| 0.0229        | 2.0   | 5438  | 0.1920          | 0.8092    | 0.8219 | 0.8155 | 0.9644   |
| 0.0069        | 3.0   | 8157  | 0.2054          | 0.8130    | 0.8481 | 0.8302 | 0.9656   |
| 0.0023        | 4.0   | 10876 | 0.2299          | 0.8122    | 0.8475 | 0.8294 | 0.9661   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	newtonkwan/gpt2-xl-ft-3 | 
	newtonkwan | 2022-03-17T10:47:43Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-16T23:58:55Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-3
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-3
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 156  | 1.3062          |
| No log        | 2.0   | 312  | 1.3141          |
| No log        | 3.0   | 468  | 1.3810          |
| 1.1725        | 4.0   | 624  | 1.4315          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 138.43353271484375
### Dataset Size
Size: 25000
 | 
| 
	libalabala/marian-finetuned-kde4-en-to-fr | 
	libalabala | 2022-03-17T08:13:54Z | 7 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "marian",
  "text2text-generation",
  "translation",
  "generated_from_trainer",
  "dataset:kde4",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	translation | 2022-03-16T07:09:29Z | 
	---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	sanchit-gandhi/wav2vec2-2-rnd-no-adapter | 
	sanchit-gandhi | 2022-03-17T06:35:21Z | 6 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "speech-encoder-decoder",
  "automatic-speech-recognition",
  "generated_from_trainer",
  "dataset:librispeech_asr",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-15T19:50:28Z | 
	---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8384
- Wer: 0.1367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.2245        | 1.68  | 1500  | 6.1442          | 1.5986 |
| 5.4521        | 3.36  | 3000  | 5.4335          | 1.6439 |
| 3.3659        | 5.04  | 4500  | 3.6455          | 0.6503 |
| 1.5724        | 6.73  | 6000  | 2.3554          | 0.3386 |
| 1.4759        | 8.41  | 7500  | 1.7423          | 0.2889 |
| 1.0826        | 10.09 | 9000  | 1.3818          | 0.2209 |
| 0.6769        | 11.77 | 10500 | 1.1268          | 0.1737 |
| 0.7348        | 13.45 | 12000 | 0.9990          | 0.1575 |
| 0.5419        | 15.13 | 13500 | 0.9435          | 0.1560 |
| 0.4212        | 16.82 | 15000 | 0.8678          | 0.1405 |
| 0.3805        | 18.5  | 16500 | 0.8384          | 0.1367 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
 | 
| 
	saghar/TinyBERT_General_6L_768D-finetuned-wikitext103 | 
	saghar | 2022-03-17T06:14:16Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "bert",
  "fill-mask",
  "generated_from_trainer",
  "dataset:wikitext",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	fill-mask | 2022-03-16T22:46:35Z | 
	---
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: TinyBERT_General_6L_768D-finetuned-wikitext103
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyBERT_General_6L_768D-finetuned-wikitext103
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_6L_768D](https://huggingface.co/huawei-noah/TinyBERT_General_6L_768D) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1792        | 1.0   | 3125 | 3.5465          |
| 3.6726        | 2.0   | 6250 | 3.4226          |
| 3.6065        | 3.0   | 9375 | 3.3768          |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
 | 
| 
	sanchit-gandhi/wav2vec2-2-rnd-2-layer-no-adapter | 
	sanchit-gandhi | 2022-03-17T02:23:57Z | 6 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "speech-encoder-decoder",
  "automatic-speech-recognition",
  "generated_from_trainer",
  "dataset:librispeech_asr",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-15T19:50:51Z | 
	---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8365
- Wer: 0.2812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.8017        | 1.68  | 1500  | 5.7161          | 1.3220 |
| 4.5907        | 3.36  | 3000  | 4.7936          | 0.9799 |
| 3.151         | 5.04  | 4500  | 4.1610          | 0.7752 |
| 1.5166        | 6.73  | 6000  | 3.5939          | 0.5343 |
| 2.4523        | 8.41  | 7500  | 4.0013          | 0.6954 |
| 1.423         | 10.09 | 9000  | 2.6917          | 0.4476 |
| 0.7882        | 11.77 | 10500 | 2.4493          | 0.3967 |
| 1.1643        | 13.45 | 12000 | 2.0629          | 0.3234 |
| 0.5352        | 15.13 | 13500 | 2.0625          | 0.3363 |
| 0.407         | 16.82 | 15000 | 1.8378          | 0.2812 |
| 0.1162        | 18.5  | 16500 | 1.8365          | 0.2812 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
 | 
| 
	cammy/pegasus-cnn_dailymail-100-lit-evalMA-ga | 
	cammy | 2022-03-17T02:22:31Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "pegasus",
  "text2text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text2text-generation | 2022-03-17T02:06:51Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: pegasus-cnn_dailymail-100-lit-evalMA-ga
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-cnn_dailymail-100-lit-evalMA-ga
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
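A minimal summarization sketch with the `transformers` pipeline API, assuming the checkpoint is used the same way as the base `google/pegasus-cnn_dailymail` model; the article text is a placeholder.
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="cammy/pegasus-cnn_dailymail-100-lit-evalMA-ga",
)

article = "..."  # any news-style passage to summarize
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```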
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
 | 
| 
	nbroad/mrasp2-6e6d-no-mono | 
	nbroad | 2022-03-17T00:13:23Z | 6 | 2 | 
	transformers | 
	[
  "transformers",
  "fsmt",
  "text2text-generation",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text2text-generation | 2022-03-02T23:29:05Z | 
	Model not mine; taken from https://github.com/PANXiao1994/mRASP2. | 
| 
	Guen/guen_test_prompt_generation | 
	Guen | 2022-03-16T22:33:29Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "t5",
  "text2text-generation",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text2text-generation | 2022-03-16T22:18:10Z | 
	A small language-generation head that produces text from a prompt.
Fine-tuned from the t5-base model on the aeslc dataset. | 
| 
	newtonkwan/gpt2-xl-ft-0 | 
	newtonkwan | 2022-03-16T21:58:33Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-16T14:26:02Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-0
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-0
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0324
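A minimal generation sketch with the `transformers` pipeline API; note that GPT-2 XL is a ~1.5B-parameter model, so loading it needs several GB of memory, and the prompt and sampling settings below are only illustrative.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="newtonkwan/gpt2-xl-ft-0")

# Sampling parameters are illustrative; tune them for your use case.
output = generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.9)
print(output[0]["generated_text"])
```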
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.96  | 6    | 5.1701          |
| No log        | 1.96  | 12   | 4.1214          |
| No log        | 2.96  | 18   | 2.5305          |
| No log        | 3.96  | 24   | 2.0324          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.31455421447754
### Dataset Size
Size: 1000 | 
| 
	ScandinavianMrT/gpt2_prefinetune_IMDB | 
	ScandinavianMrT | 2022-03-16T19:05:43Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "license:mit",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-16T18:44:37Z | 
	---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2_prefinetune_IMDB
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_prefinetune_IMDB
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6875
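Assuming the reported loss is the usual mean token-level cross-entropy of a causal language model, the corresponding perplexity is simply its exponential:
```python
import math

eval_loss = 3.6875          # evaluation loss reported above
print(math.exp(eval_loss))  # ≈ 39.9, the implied perplexity
```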
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7838        | 1.0   | 2997 | 3.6875          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	horsbug98/Part_1_mBERT_Model_E1 | 
	horsbug98 | 2022-03-16T18:48:12Z | 23 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "bert",
  "question-answering",
  "generated_from_trainer",
  "dataset:tydiqa",
  "endpoints_compatible",
  "region:us"
] | 
	question-answering | 2022-03-16T18:20:20Z | 
	---
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: debug_bert_finetuned_dutch_task2_1
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug_bert_finetuned_dutch_task2_1
This model is a fine-tuned version of [henryk/bert-base-multilingual-cased-finetuned-dutch-squad2](https://huggingface.co/henryk/bert-base-multilingual-cased-finetuned-dutch-squad2) on the tydiqa secondary_task dataset.
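A minimal extractive question-answering sketch with the `transformers` pipeline API; since the base checkpoint is a Dutch SQuAD2 model further fine-tuned on the TyDi QA secondary task, the Dutch question/context pair below is only an illustrative example.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="horsbug98/Part_1_mBERT_Model_E1")

result = qa(
    question="Wat is de hoofdstad van Nederland?",
    context="Amsterdam is de hoofdstad en grootste stad van Nederland.",
)
print(result["answer"], result["score"])
```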
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
 | 
| 
	DrishtiSharma/poem-gen-gpt2-small-spanish | 
	DrishtiSharma | 2022-03-16T18:46:26Z | 3 | 1 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "license:apache-2.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-16T17:46:36Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: poem-gen-gpt2-small-spanish
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-gpt2-small-spanish
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2121        | 1.0   | 2569 | 3.9954          |
| 4.0612        | 2.0   | 5138 | 3.9375          |
| 3.9988        | 3.0   | 7707 | 3.9229          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	ScandinavianMrT/distilbert-IMDB-POS | 
	ScandinavianMrT | 2022-03-16T18:15:20Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "distilbert",
  "text-classification",
  "generated_from_trainer",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-16T17:42:29Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-IMDB
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-IMDB
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1905
- Accuracy: 0.9295
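A minimal sentiment-classification sketch with the `transformers` pipeline API; the exact label names depend on how the head was configured (they may appear as `LABEL_0`/`LABEL_1`), so the example only shows how to obtain a prediction.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ScandinavianMrT/distilbert-IMDB-POS")

# Returns the top label and its score, e.g. [{"label": "...", "score": 0.99}].
print(classifier("This movie was surprisingly good."))
```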
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1928        | 1.0   | 2000 | 0.1905          | 0.9295   |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	DrishtiSharma/poem-gen-t5-small_v1 | 
	DrishtiSharma | 2022-03-16T17:30:57Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "t5",
  "text2text-generation",
  "generated_from_trainer",
  "license:apache-2.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text2text-generation | 2022-03-16T11:37:54Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: poem-gen-t5-small_v1
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-t5-small_v1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.5397        | 0.32  | 5000   | 3.3474          |
| 3.4107        | 0.63  | 10000  | 3.2260          |
| 3.3236        | 0.95  | 15000  | 3.1414          |
| 3.25          | 1.26  | 20000  | 3.0884          |
| 3.2055        | 1.58  | 25000  | 3.0461          |
| 3.1677        | 1.89  | 30000  | 3.0057          |
| 3.1189        | 2.21  | 35000  | 2.9786          |
| 3.0972        | 2.53  | 40000  | 2.9533          |
| 3.0855        | 2.84  | 45000  | 2.9318          |
| 3.0364        | 3.16  | 50000  | 2.9124          |
| 3.0125        | 3.47  | 55000  | 2.8962          |
| 2.9987        | 3.79  | 60000  | 2.8812          |
| 2.9734        | 4.1   | 65000  | 2.8675          |
| 2.9782        | 4.42  | 70000  | 2.8563          |
| 2.9492        | 4.74  | 75000  | 2.8446          |
| 2.9383        | 5.05  | 80000  | 2.8360          |
| 2.9322        | 5.37  | 85000  | 2.8250          |
| 2.9193        | 5.68  | 90000  | 2.8159          |
| 2.9119        | 6.0   | 95000  | 2.8095          |
| 2.8893        | 6.31  | 100000 | 2.8046          |
| 2.8927        | 6.63  | 105000 | 2.7975          |
| 2.8944        | 6.95  | 110000 | 2.7879          |
| 2.8568        | 7.26  | 115000 | 2.7856          |
| 2.8648        | 7.58  | 120000 | 2.7808          |
| 2.8534        | 7.89  | 125000 | 2.7737          |
| 2.8563        | 8.21  | 130000 | 2.7696          |
| 2.8387        | 8.52  | 135000 | 2.7664          |
| 2.8328        | 8.84  | 140000 | 2.7643          |
| 2.8137        | 9.16  | 145000 | 2.7615          |
| 2.8058        | 9.47  | 150000 | 2.7548          |
| 2.8138        | 9.79  | 155000 | 2.7547          |
| 2.8098        | 10.1  | 160000 | 2.7506          |
| 2.7944        | 10.42 | 165000 | 2.7479          |
| 2.809         | 10.73 | 170000 | 2.7443          |
| 2.7897        | 11.05 | 175000 | 2.7431          |
| 2.7955        | 11.37 | 180000 | 2.7403          |
| 2.793         | 11.68 | 185000 | 2.7403          |
| 2.798         | 12.0  | 190000 | 2.7351          |
| 2.7955        | 12.31 | 195000 | 2.7351          |
| 2.785         | 12.63 | 200000 | 2.7329          |
| 2.7701        | 12.94 | 205000 | 2.7329          |
| 2.7744        | 13.26 | 210000 | 2.7317          |
| 2.7827        | 13.58 | 215000 | 2.7295          |
| 2.7793        | 13.89 | 220000 | 2.7303          |
| 2.7782        | 14.21 | 225000 | 2.7298          |
| 2.7762        | 14.52 | 230000 | 2.7289          |
| 2.7719        | 14.84 | 235000 | 2.7292          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	anton-l/xls-r-300m-bart-base | 
	anton-l | 2022-03-16T17:27:16Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "speech-encoder-decoder",
  "automatic-speech-recognition",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-16T16:36:37Z | 
	---
license: apache-2.0
---
 | 
| 
	horsbug98/Part_2_mBERT_Model_E2 | 
	horsbug98 | 2022-03-16T17:25:02Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "bert",
  "question-answering",
  "generated_from_trainer",
  "dataset:tydiqa",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	question-answering | 2022-03-16T17:04:38Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: debug_mbert_task2_2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug_mbert_task2_2
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tydiqa secondary_task dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
 | 
| 
	horsbug98/Part_2_mBERT_Model_E1 | 
	horsbug98 | 2022-03-16T17:01:57Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "bert",
  "question-answering",
  "generated_from_trainer",
  "dataset:tydiqa",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	question-answering | 2022-03-16T16:53:45Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: debug_mbert_task2_1
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug_mbert_task2_1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tydiqa secondary_task dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
 | 
| 
	adityavithaldas/Fashion_Category_Classifier | 
	adityavithaldas | 2022-03-16T16:05:18Z | 0 | 4 | null | 
	[
  "license:cc-by-4.0",
  "region:us"
] | null | 2022-03-16T15:59:51Z | 
	---
license: cc-by-4.0
---
This model uses the DeepFashion dataset to build a category classifier over the roughly 50 provided categories. 
https://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html
This model leverages ViT (Vision Transformer), loaded with the custom dataset and the roughly 50 categories to which the images are assigned. The objective is to expand on this and reach
a. an accuracy of 90+ on the top-5 categories, and
b. an accuracy of 70+ overall. 
In addition, we also plan to create attribute extractors to pull out key attributes (primary color, checked, sleeve, collar, etc.) as we proceed.
 | 
| 
	Neulvo/marian-finetuned-kde4-en-to-fr | 
	Neulvo | 2022-03-16T15:04:48Z | 7 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "marian",
  "text2text-generation",
  "translation",
  "generated_from_trainer",
  "dataset:kde4",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	translation | 2022-03-16T10:57:21Z | 
	---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: kde4
      type: kde4
      args: en-fr
    metrics:
    - name: Bleu
      type: bleu
      value: 52.893830905210194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8564
- Bleu: 52.8938
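A minimal translation sketch with the `transformers` pipeline API, assuming the checkpoint is used like the base `Helsinki-NLP/opus-mt-en-fr` model; the KDE-style source string is only illustrative.
```python
from transformers import pipeline

translator = pipeline("translation", model="Neulvo/marian-finetuned-kde4-en-to-fr")

print(translator("Default to expanded threads")[0]["translation_text"])
```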
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
 | 
| 
	newtonkwan/gpt2-xl-ft-with-non-challenging-0.8 | 
	newtonkwan | 2022-03-16T13:27:02Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "gpt2",
  "text-generation",
  "generated_from_trainer",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2022-03-16T12:05:34Z | 
	---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-with-non-challenging-0.8
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-with-non-challenging-0.8
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 1    | 5.4443          |
| No log        | 2.0   | 2    | 5.4221          |
| No log        | 3.0   | 3    | 5.3779          |
| No log        | 4.0   | 4    | 5.3121          |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 10 | 
| 
	anton-l/xtreme_s_xlsr_mls_upd | 
	anton-l | 2022-03-16T13:13:22Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "wav2vec2",
  "automatic-speech-recognition",
  "mls",
  "google/xtreme_s",
  "generated_from_trainer",
  "pl",
  "dataset:xtreme_s",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2022-03-16T12:53:26Z | 
	---
language:
- pl
license: apache-2.0
tags:
- mls
- google/xtreme_s
- generated_from_trainer
datasets:
- xtreme_s
model-index:
- name: xtreme_s_xlsr_mls_upd
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_mls_upd
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MLS.PL dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1489
- Wer: 1.0
- Cer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|
| 3.4678        | 0.59  | 20   | 3.4581          | 1.0 | 1.0 |
| 3.1713        | 1.18  | 40   | 3.1816          | 1.0 | 1.0 |
| 3.134         | 1.76  | 60   | 3.1538          | 1.0 | 1.0 |
| 3.132         | 2.35  | 80   | 3.1411          | 1.0 | 1.0 |
| 3.1295        | 2.94  | 100  | 3.1373          | 1.0 | 1.0 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
 | 
| 
	mazenalasali/layoutlmv2-finetuned-funsd-test | 
	mazenalasali | 2022-03-16T13:02:29Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "layoutlmv2",
  "token-classification",
  "generated_from_trainer",
  "license:cc-by-nc-sa-4.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-16T11:54:16Z | 
	---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-funsd-test
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd-test
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	krinal214/xlm-all | 
	krinal214 | 2022-03-16T13:01:05Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "xlm-roberta",
  "question-answering",
  "generated_from_trainer",
  "dataset:tydiqa",
  "license:mit",
  "endpoints_compatible",
  "region:us"
] | 
	question-answering | 2022-03-16T12:19:11Z | 
	---
license: mit
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: xlm-all-final
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-all-final
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4483        | 1.0   | 3381 | 0.6038          |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
 | 
| 
	RobertoMCA97/xlm-roberta-base-finetuned-panx-it | 
	RobertoMCA97 | 2022-03-16T12:56:38Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "xlm-roberta",
  "token-classification",
  "generated_from_trainer",
  "dataset:xtreme",
  "license:mit",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-16T12:41:09Z | 
	---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      args: PAN-X.it
    metrics:
    - name: F1
      type: f1
      value: 0.822805578342904
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2323
- F1: 0.8228
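A minimal named-entity-recognition sketch with the `transformers` pipeline API; `aggregation_strategy="simple"` merges word pieces into whole entities, and the Italian sentence is only an illustrative example.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="RobertoMCA97/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)

print(ner("Giuseppe Verdi è nato a Busseto, in Italia."))
```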
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8126        | 1.0   | 70   | 0.3361          | 0.7231 |
| 0.2995        | 2.0   | 140  | 0.2526          | 0.8079 |
| 0.1865        | 3.0   | 210  | 0.2323          | 0.8228 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	krinal214/xlm-3lang | 
	krinal214 | 2022-03-16T12:55:35Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "xlm-roberta",
  "question-answering",
  "generated_from_trainer",
  "dataset:tydiqa",
  "license:mit",
  "endpoints_compatible",
  "region:us"
] | 
	question-answering | 2022-03-16T12:40:10Z | 
	---
license: mit
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: xlm-eng-beng-tel
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-eng-beng-tel
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2927        | 1.0   | 810  | 0.7303          |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
 | 
| 
	krinal214/zero_shot | 
	krinal214 | 2022-03-16T12:41:46Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "bert",
  "question-answering",
  "generated_from_trainer",
  "dataset:squad",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	question-answering | 2022-03-16T11:37:09Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: zero_last
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zero_last
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9816        | 1.0   | 5557 | 1.9190          |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
 | 
| 
	RobertoMCA97/xlm-roberta-base-finetuned-panx-de-fr | 
	RobertoMCA97 | 2022-03-16T12:24:41Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "xlm-roberta",
  "token-classification",
  "generated_from_trainer",
  "license:mit",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2022-03-16T12:03:40Z | 
	---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1667
- F1: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2885        | 1.0   | 715  | 0.1817          | 0.8287 |
| 0.1497        | 2.0   | 1430 | 0.1618          | 0.8442 |
| 0.0944        | 3.0   | 2145 | 0.1667          | 0.8582 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
 | 
| 
	ixa-ehu/roberta-eus-euscrawl-base-cased | 
	ixa-ehu | 2022-03-16T11:48:42Z | 14 | 2 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "roberta",
  "fill-mask",
  "basque",
  "eu",
  "arxiv:2203.08111",
  "license:cc-by-nc-4.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	fill-mask | 2022-03-16T09:54:43Z | 
	---
language: eu
license: cc-by-nc-4.0
tags:
- basque
- roberta
---
# Roberta-eus Euscrawl base cased
This is a RoBERTa model for Basque presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, pre-trained on different corpora:
- roberta-eus-euscrawl-base-cased: Basque RoBERTa trained on EusCrawl, a corpus created using tailored crawling of Basque sites. EusCrawl contains 12,528k documents and 423M tokens.
- roberta-eus-euscrawl-large-cased: Basque RoBERTa large trained on EusCrawl.
- roberta-eus-mC4-base-cased: Basque RoBERTa trained on the Basque portion of the mC4 dataset.
- roberta-eus-CC100-base-cased: Basque RoBERTa trained on the Basque portion of the CC100 dataset.
The models have been tested on five different downstream tasks for Basque: Topic classification, Sentiment analysis, Stance detection, Named Entity Recognition (NER), and Question Answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). See a summary of results below:
| Model                            | Topic class. | Sentiment | Stance det. |     NER  |     QA   | Average  |
|----------------------------------|--------------|-----------|-------------|----------|----------|----------|
| roberta-eus-euscrawl-base-cased  |         76.2 |      77.7 |        57.4 |    86.8  |    34.6  |    66.5  |
| roberta-eus-euscrawl-large-cased |     **77.6** |      78.8 |        62.9 | **87.2** | **38.3** | **69.0** |
| roberta-eus-mC4-base-cased       |         75.3 |  **80.4** |        59.1 |    86.0  |    35.2  |    67.2  |
| roberta-eus-CC100-base-cased     |         76.2 |      78.8 |    **63.4** |    85.2  |    35.8  |    67.9  |
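A minimal fill-mask sketch for this checkpoint with the `transformers` pipeline API; the Basque prompt is only an illustrative example.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ixa-ehu/roberta-eus-euscrawl-base-cased")

# RoBERTa-style models use the <mask> placeholder token.
for prediction in fill_mask("Euskal Herriko hiriburua <mask> da."):
    print(prediction["token_str"], round(prediction["score"], 4))
```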
If you use any of these models, please cite the following paper:
```
@misc{artetxe2022euscrawl,
 title={Does corpus quality really matter for low-resource languages?},
 author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and
         Olatz Perez-de-Viñaspre and Aitor Soroa},
 year={2022},
 eprint={2203.08111},
 archivePrefix={arXiv},
 primaryClass={cs.CL}
}
```
 | 
| 
	tae898/emoberta-large | 
	tae898 | 2022-03-16T11:01:48Z | 1,014 | 7 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "roberta",
  "text-classification",
  "emoberta",
  "en",
  "dataset:MELD",
  "dataset:IEMOCAP",
  "arxiv:2108.12009",
  "license:mit",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-14T20:33:23Z | 
	---
language: en
tags:
- emoberta
- roberta
license: mit
datasets:
- MELD
- IEMOCAP
---
Check https://github.com/tae898/erc for the details
[Watch a demo video!](https://youtu.be/qbr7fNd6J28)
# Emotion Recognition in Conversation (ERC)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on?p=emoberta-speaker-aware-emotion-recognition-in)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=emoberta-speaker-aware-emotion-recognition-in)
At the moment, we only use the text modality to classify the emotion of the utterances. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP).
## Prerequisites
1. An x86-64 Unix or Unix-like machine
1. Python 3.8 or higher
1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't mess up the system Python.
1. [`multimodal-datasets` repo](https://github.com/tae898/multimodal-datasets) (submodule)
1. pip install -r requirements.txt
## EmoBERTa training
First configure the hyperparameters and the dataset in `train-erc-text.yaml`, and then
run the command below in this directory. I recommend running it in a virtualenv.
```sh
python train-erc-text.py
```
This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`.
## Results on the test split (weighted f1 scores)
| Model    |                                 |   MELD    |  IEMOCAP  |
| -------- | ------------------------------- | :-------: | :-------: |
| EmoBERTa | No past and future utterances   |   63.46   |   56.09   |
|          | Only past utterances            |   64.55   | **68.57** |
|          | Only future utterances          |   64.23   |   66.56   |
|          | Both past and future utterances | **65.61** |   67.42   |
|          | → *without speaker names*       |   65.07   |   64.02   |
Above numbers are the mean values of five random seed runs.
If you want to see more training and test details, check out `./results/`.
If you want to download the trained checkpoints and stuff, then [here](https://surfdrive.surf.nl/files/index.php/s/khREwk4MUI7MSnO/download) is where you can download them. It's a pretty big zip file.
## Deployment
### Huggingface
We have released our models on huggingface:
- [emoberta-base](https://huggingface.co/tae898/emoberta-base)
- [emoberta-large](https://huggingface.co/tae898/emoberta-large)
They are based on [RoBERTa-base](https://huggingface.co/roberta-base) and [RoBERTa-large](https://huggingface.co/roberta-large), respectively. They were trained on [both MELD and IEMOCAP datasets](utterance-ordered-MELD_IEMOCAP.json). Our deployed models are not speaker-aware and do not take previous utterances into account, meaning that they classify one utterance at a time without speaker information (e.g., "I love you").
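Besides the Flask service below, the checkpoint can also be loaded directly with the `transformers` pipeline API; a minimal sketch, assuming a plain text-classification head over the seven emotion labels:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tae898/emoberta-large")

# Returns the highest-scoring of the seven emotion labels for a single utterance.
print(classifier("I love you."))
```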
### Flask app
You can either run the Flask RESTful server app as a docker container or just as a python script.
1. Running the app as a docker container **(recommended)**.
   There are four images. Take what you need:
   - `docker run -it --rm -p 10006:10006 tae898/emoberta-base`
   - `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda`
   - `docker run -it --rm -p 10006:10006 tae898/emoberta-large`
   - `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda`
1. Running the app in your python environment:
   This method is less recommended than the docker one.
   Run `pip install -r requirements-deploy.txt` first.<br>
   The [`app.py`](app.py) is a flask RESTful server. The usage is below:
   ```console
   app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE]
   ```
   For example:
   ```sh
   python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base
   ```
### Client
Once the app is running, you can send text to the server. First install the necessary packages with `pip install -r requirements-client.txt`, and then run [client.py](client.py). The usage is as below:
```console
client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT
```
For example:
```sh
python client.py --text "Emotion recognition is so cool\!"
```
will give you:
```json
{
    "neutral": 0.0049800905,
    "joy": 0.96399665,
    "surprise": 0.018937444,
    "anger": 0.0071516023,
    "sadness": 0.002021492,
    "disgust": 0.001495996,
    "fear": 0.0014167271
}
```
## Troubleshooting
The best way to find and solve your problems is to look in the GitHub issues tab. If you can't find what you want, feel free to raise an issue. We are pretty responsive.
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
1. Run `make style && quality` in the root repo directory, to ensure code quality.
1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
1. Push to the Branch (`git push origin feature/AmazingFeature`)
1. Open a Pull Request
## Cite our work
Check out the [paper](https://arxiv.org/abs/2108.12009).
```bibtex
@misc{kim2021emoberta,
      title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa}, 
      author={Taewoon Kim and Piek Vossen},
      year={2021},
      eprint={2108.12009},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
[](https://zenodo.org/badge/latestdoi/328375452)<br>
## Authors
- [Taewoon Kim](https://taewoonkim.com/)
## License
[MIT](https://choosealicense.com/licenses/mit/)
 | 
| 
	tae898/emoberta-base | 
	tae898 | 2022-03-16T11:01:29Z | 124 | 5 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "roberta",
  "text-classification",
  "emoberta",
  "en",
  "dataset:MELD",
  "dataset:IEMOCAP",
  "arxiv:2108.12009",
  "license:mit",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-03-14T20:03:08Z | 
	---
language: en
tags:
- emoberta
- roberta
license: mit
datasets:
- MELD
- IEMOCAP
---
Check https://github.com/tae898/erc for the details
[Watch a demo video!](https://youtu.be/qbr7fNd6J28)
# Emotion Recognition in Conversation (ERC)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on?p=emoberta-speaker-aware-emotion-recognition-in)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=emoberta-speaker-aware-emotion-recognition-in)
At the moment, we only use the text modality to classify the emotion of the utterances. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP).
## Prerequisites
1. An x86-64 Unix or Unix-like machine
1. Python 3.8 or higher
1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't mess up the system Python.
1. [`multimodal-datasets` repo](https://github.com/tae898/multimodal-datasets) (submodule)
1. pip install -r requirements.txt
## EmoBERTa training
First configure the hyperparameters and the dataset in `train-erc-text.yaml`, and then
run the command below in this directory. I recommend running it in a virtualenv.
```sh
python train-erc-text.py
```
This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`.
## Results on the test split (weighted f1 scores)
| Model    |                                 |   MELD    |  IEMOCAP  |
| -------- | ------------------------------- | :-------: | :-------: |
| EmoBERTa | No past and future utterances   |   63.46   |   56.09   |
|          | Only past utterances            |   64.55   | **68.57** |
|          | Only future utterances          |   64.23   |   66.56   |
|          | Both past and future utterances | **65.61** |   67.42   |
|          | → *without speaker names*       |   65.07   |   64.02   |
Above numbers are the mean values of five random seed runs.
If you want to see more training and test details, check out `./results/`.
If you want to download the trained checkpoints and stuff, then [here](https://surfdrive.surf.nl/files/index.php/s/khREwk4MUI7MSnO/download) is where you can download them. It's a pretty big zip file.
## Deployment
### Huggingface
We have released our models on huggingface:
- [emoberta-base](https://huggingface.co/tae898/emoberta-base)
- [emoberta-large](https://huggingface.co/tae898/emoberta-large)
They are based on [RoBERTa-base](https://huggingface.co/roberta-base) and [RoBERTa-large](https://huggingface.co/roberta-large), respectively. They were trained on [both MELD and IEMOCAP datasets](utterance-ordered-MELD_IEMOCAP.json). Our deployed models are not speaker-aware and do not take previous utterances into account, meaning that they classify one utterance at a time without speaker information (e.g., "I love you").
### Flask app
You can either run the Flask RESTful server app as a docker container or just as a python script.
1. Running the app as a docker container **(recommended)**.
   There are four images. Take what you need:
   - `docker run -it --rm -p 10006:10006 tae898/emoberta-base`
   - `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda`
   - `docker run -it --rm -p 10006:10006 tae898/emoberta-large`
   - `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda`
1. Running the app in your python environment:
   This method is less recommended than the docker one.
   Run `pip install -r requirements-deploy.txt` first.<br>
   The [`app.py`](app.py) is a flask RESTful server. The usage is below:
   ```console
   app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE]
   ```
   For example:
   ```sh
   python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base
   ```
### Client
Once the app is running, you can send text to the server. First install the necessary packages with `pip install -r requirements-client.txt`, and then run [client.py](client.py). The usage is as below:
```console
client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT
```
For example:
```sh
python client.py --text "Emotion recognition is so cool\!"
```
will give you:
```json
{
    "neutral": 0.0049800905,
    "joy": 0.96399665,
    "surprise": 0.018937444,
    "anger": 0.0071516023,
    "sadness": 0.002021492,
    "disgust": 0.001495996,
    "fear": 0.0014167271
}
```
## Troubleshooting
The best way to find and solve your problems is to look in the GitHub issues tab. If you can't find what you want, feel free to raise an issue. We are pretty responsive.
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
1. Run `make style && quality` in the root repo directory, to ensure code quality.
1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
1. Push to the Branch (`git push origin feature/AmazingFeature`)
1. Open a Pull Request
## Cite our work
Check out the [paper](https://arxiv.org/abs/2108.12009).
```bibtex
@misc{kim2021emoberta,
      title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa}, 
      author={Taewoon Kim and Piek Vossen},
      year={2021},
      eprint={2108.12009},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
[](https://zenodo.org/badge/latestdoi/328375452)<br>
## Authors
- [Taewoon Kim](https://taewoonkim.com/)
## License
[MIT](https://choosealicense.com/licenses/mit/)
 | 
| 
	UWB-AIR/Czert-B-base-cased | 
	UWB-AIR | 2022-03-16T10:39:50Z | 615 | 3 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tf",
  "bert",
  "pretraining",
  "cs",
  "fill-mask",
  "arxiv:2103.13031",
  "endpoints_compatible",
  "region:us"
] | 
	fill-mask | 2022-03-02T23:29:05Z | 
	---
tags:
- cs
- fill-mask
---
# CZERT
This repository keeps the trained Czert-B model for the paper [Czert – Czech BERT-like Model for Language Representation](https://arxiv.org/abs/2103.13031).
For more information, see the paper.
## Available Models
You can download **MLM & NSP only** pretrained models
~~[CZERT-A-v1](https://air.kiv.zcu.cz/public/CZERT-A-czert-albert-base-uncased.zip)
[CZERT-B-v1](https://air.kiv.zcu.cz/public/CZERT-B-czert-bert-base-cased.zip)~~
After some additional experiments, we found that the tokenizer configs were exported incorrectly. In Czert-B-v1, the tokenizer parameter "do_lower_case" was wrongly set to true. In Czert-A-v1, the parameter "strip_accents" was incorrectly set to true.
Both mistakes are fixed in v2.
[CZERT-A-v2](https://air.kiv.zcu.cz/public/CZERT-A-v2-czert-albert-base-uncased.zip)
[CZERT-B-v2](https://air.kiv.zcu.cz/public/CZERT-B-v2-czert-bert-base-cased.zip)
or choose from one of **Finetuned Models**
| | Models  |
| - | - |
| Sentiment Classification<br> (Facebook or CSFD) | [CZERT-A-sentiment-FB](https://air.kiv.zcu.cz/public/CZERT-A_fb.zip) <br> [CZERT-B-sentiment-FB](https://air.kiv.zcu.cz/public/CZERT-B_fb.zip) <br> [CZERT-A-sentiment-CSFD](https://air.kiv.zcu.cz/public/CZERT-A_csfd.zip) <br> [CZERT-B-sentiment-CSFD](https://air.kiv.zcu.cz/public/CZERT-B_csfd.zip) |
| Semantic Text Similarity <br> (Czech News Agency) | [CZERT-A-sts-CNA](https://air.kiv.zcu.cz/public/CZERT-A-sts-CNA.zip) <br> [CZERT-B-sts-CNA](https://air.kiv.zcu.cz/public/CZERT-B-sts-CNA.zip) |
| Named Entity Recognition                                                                                                                                                 | [CZERT-A-ner-CNEC](https://air.kiv.zcu.cz/public/CZERT-A-ner-CNEC-cased.zip) <br>  [CZERT-B-ner-CNEC](https://air.kiv.zcu.cz/public/CZERT-B-ner-CNEC-cased.zip) <br>[PAV-ner-CNEC](https://air.kiv.zcu.cz/public/PAV-ner-CNEC-cased.zip) <br> [CZERT-A-ner-BSNLP](https://air.kiv.zcu.cz/public/CZERT-A-ner-BSNLP-cased.zip)<br>[CZERT-B-ner-BSNLP](https://air.kiv.zcu.cz/public/CZERT-B-ner-BSNLP-cased.zip) <br>[PAV-ner-BSNLP](https://air.kiv.zcu.cz/public/PAV-ner-BSNLP-cased.zip) |
| Morphological Tagging<br> | [CZERT-A-morphtag-126k](https://air.kiv.zcu.cz/public/CZERT-A-morphtag-126k-cased.zip)<br>[CZERT-B-morphtag-126k](https://air.kiv.zcu.cz/public/CZERT-B-morphtag-126k-cased.zip)                                                                                                                                                                                                                                                                                  |
| Semantic Role Labelling                                                                                                                                                  |[CZERT-A-srl](https://air.kiv.zcu.cz/public/CZERT-A-srl-cased.zip)<br>                                              [CZERT-B-srl](https://air.kiv.zcu.cz/public/CZERT-B-srl-cased.zip)                                                                                                                                                                                                                                                                                                    |
## How to Use CZERT?
### Sentence Level Tasks
We evaluate our model on two sentence level tasks:
* Sentiment Classification,
* Semantic Text Similarity.
<!--
    tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
    model = TFAlbertForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, num_labels=1)

or

    self.tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
    self.model_encoder = AutoModelForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, from_tf=True)
-->
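As a rough illustration of the sentence-level setup, here is a minimal sketch that loads Czert-B with `transformers`; it assumes the `UWB-AIR/Czert-B-base-cased` Hub checkpoint matches the v2 release above, and the Czech example sentence is only a placeholder.
```python
from transformers import AutoModel, BertTokenizerFast

# strip_accents=False mirrors the tokenizer settings shown in the comment above.
tokenizer = BertTokenizerFast.from_pretrained("UWB-AIR/Czert-B-base-cased", strip_accents=False)
model = AutoModel.from_pretrained("UWB-AIR/Czert-B-base-cased")

inputs = tokenizer("Praha je hlavní město České republiky.", return_tensors="pt")
outputs = model(**inputs)  # last_hidden_state can feed a task-specific head
print(outputs.last_hidden_state.shape)
```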
### Document Level Tasks
We evaluate our model on one document level task
* Multi-label Document Classification.
### Token Level Tasks
We evaluate our model on three token level tasks:
* Named Entity Recognition,
* Morphological Tagging,
* Semantic Role Labelling. 
## Downstream Tasks Fine-tuning Results
### Sentiment Classification
|      |          mBERT           |        SlavicBERT        |         ALBERT-r         |         Czert-A         |             Czert-B              |
|:----:|:------------------------:|:------------------------:|:------------------------:|:-----------------------:|:--------------------------------:|
|  FB  | 71.72 ± 0.91   | 73.87 ± 0.50  | 59.50 ± 0.47  | 72.47 ± 0.72  | **76.55** ± **0.14** |
| CSFD | 82.80 ± 0.14   | 82.51 ± 0.14  | 75.40 ± 0.18  | 79.58 ± 0.46  | **84.79** ± **0.26** |
Average F1 results for the Sentiment Classification task. For more information, see [the paper](https://arxiv.org/abs/2103.13031). 
                 
### Semantic Text Similarity
|              |   **mBERT**    |   **Pavlov**   | **Albert-random** |  **Czert-A**   |      **Czert-B**       |
|:-------------|:--------------:|:--------------:|:-----------------:|:--------------:|:----------------------:|
| STA-CNA      | 83.335 ± 0.063 | 83.593 ± 0.050 |  43.184 ± 0.125   | 82.942 ± 0.106 | **84.345** ± **0.028** |
| STS-SVOB-img | 79.367 ± 0.486 | 79.900 ± 0.810 |  15.739 ± 2.992   | 79.444 ± 0.338 | **83.744** ± **0.395** |
| STS-SVOB-hl  | 78.833 ± 0.296 | 76.996 ± 0.305 |  33.949 ± 1.807   | 75.089 ± 0.806 |     **79.827 ± 0.469**     |
Comparison of Pearson correlation achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on semantic text similarity. For more information see [the paper](https://arxiv.org/abs/2103.13031).
### Multi-label Document Classification
|       |    mBERT     |  SlavicBERT  |   ALBERT-r   |   Czert-A    |      Czert-B        |
|:-----:|:------------:|:------------:|:------------:|:------------:|:-------------------:|
| AUROC | 97.62 ± 0.08 | 97.80 ± 0.06 | 94.35 ± 0.13 | 97.49 ± 0.07 | **98.00** ± **0.04** |
|  F1   | 83.04 ± 0.16 | 84.08 ± 0.14 | 72.44 ± 0.22 | 82.27 ± 0.17 | **85.06** ± **0.11** |
Comparison of F1 and AUROC score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on multi-label document classification. For more information see [the paper](https://arxiv.org/abs/2103.13031).
### Morphological Tagging
|                        | mBERT          | Pavlov         | Albert-random  | Czert-A        | Czert-B        |
|:-----------------------|:---------------|:---------------|:---------------|:---------------|:---------------|
| Universal Dependencies | 99.176 ± 0.006 | 99.211 ± 0.008 | 96.590 ± 0.096 | 98.713 ± 0.008 | **99.300 ± 0.009** |
Comparison of F1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on morphological tagging task. For more information see [the paper](https://arxiv.org/abs/2103.13031).
### Semantic Role Labelling
<div id="tab:SRL">
|        |   mBERT    |   Pavlov   | Albert-random |  Czert-A   |  Czert-B   | dep-based | gold-dep |
|:------:|:----------:|:----------:|:-------------:|:----------:|:----------:|:---------:|:--------:|
|  span  | 78.547 ± 0.110 | 79.333 ± 0.080 |  51.365 ± 0.423   | 72.254 ± 0.172 | **81.861 ± 0.102** |    -     |    -    |
| syntax | 90.226 ± 0.224 | 90.492 ± 0.040 |  80.747 ± 0.131   | 80.319 ± 0.054 | **91.462 ± 0.062** |   85.19   |  89.52   |
SRL results – the dep-based columns are evaluated with labelled F1 from the CoNLL 2009 evaluation script; the other columns are evaluated with the same span F1 score used for NER evaluation. For more information, see [the paper](https://arxiv.org/abs/2103.13031).
</div>
### Named Entity Recognition
|            | mBERT          | Pavlov         | Albert-random  | Czert-A        | Czert-B        |
|:-----------|:---------------|:---------------|:---------------|:---------------|:---------------|
| CNEC       | **86.225 ± 0.208** | **86.565 ± 0.198** | 34.635 ± 0.343 | 72.945 ± 0.227 | 86.274 ± 0.116 |
| BSNLP 2019 | 84.006 ± 1.248 | **86.699 ± 0.370** | 19.773 ± 0.938 | 48.859 ± 0.605 | **86.729 ± 0.344** |
Comparison of F1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on the named entity recognition task. For more information, see [the paper](https://arxiv.org/abs/2103.13031).
## Licence
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. http://creativecommons.org/licenses/by-nc-sa/4.0/
## How should I cite CZERT? 
For now, please cite [the arXiv paper](https://arxiv.org/abs/2103.13031):
```
@article{sido2021czert,
      title={Czert -- Czech BERT-like Model for Language Representation}, 
      author={Jakub Sido and Ondřej Pražák and Pavel Přibáň and Jan Pašek and Michal Seják and Miloslav Konopík},
      year={2021},
      eprint={2103.13031},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      journal={arXiv preprint arXiv:2103.13031},
}
```
 | 