modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
smilton/mt5-large-qasrl-es-p2-question | smilton | 2022-11-29T04:36:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T03:55:16Z | ---
language:
- es
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt5-large-qasrl-es-p2-question
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-large-qasrl-es-p2-question
This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7515
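Since the card does not document an input format, here is a minimal, untested loading sketch using the generic seq2seq API; the Spanish sentence is only a placeholder, and the actual QA-SRL prompt format used during fine-tuning may differ:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "smilton/mt5-large-qasrl-es-p2-question"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input: the task-specific prompt format used during fine-tuning is not documented here.
inputs = tokenizer("El perro persiguió al gato por el jardín.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```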
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.11.0
- Datasets 2.7.1
- Tokenizers 0.11.0
|
renatanerenata/bart-paraphrase1-finetuned-in-to-fo | renatanerenata | 2022-11-29T04:35:59Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-29T00:54:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase1-finetuned-in-to-fo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase1-finetuned-in-to-fo
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on the None dataset.
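A hedged usage sketch with the `text2text-generation` pipeline; the English sentence is only a placeholder, since the card does not describe what the "in-to-fo" inputs look like:
```python
from transformers import pipeline

paraphraser = pipeline(
    "text2text-generation",
    model="renatanerenata/bart-paraphrase1-finetuned-in-to-fo",
)
# Placeholder input; the actual input format used during fine-tuning is not documented here.
print(paraphraser("The quick brown fox jumps over the lazy dog.", num_beams=4, max_length=64))
```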
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Zhaohui/finetuning-misinfo-model-700-Zhaohui-1_misinfo | Zhaohui | 2022-11-29T04:10:10Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T03:57:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-misinfo-model-700-Zhaohui-1_misinfo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-misinfo-model-700-Zhaohui-1_misinfo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5343
- Accuracy: 0.8571
- F1: 0.8571
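A minimal classification sketch; the label names returned (e.g. `LABEL_0`/`LABEL_1`) follow whatever mapping was used during fine-tuning and are not documented in this card:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Zhaohui/finetuning-misinfo-model-700-Zhaohui-1_misinfo",
)
# Placeholder input sentence; label semantics depend on the fine-tuning data.
print(classifier("Breaking: scientists confirm the moon is made of cheese."))
```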
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
NSandra/distilbert-base-uncased-finetuned-ner | NSandra | 2022-11-29T04:09:17Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-29T03:55:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2393
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
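A minimal token-classification sketch; the entity label set is not documented in this card, so the output tags depend on the fine-tuning data:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="NSandra/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
# Placeholder input sentence.
print(ner("Hugging Face is based in New York City."))
```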
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 1 | 1.5491 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 2.0 | 2 | 1.3278 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 3.0 | 3 | 1.2393 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ryvalenza/sd-class-butterflies-32 | ryvalenza | 2022-11-29T04:00:32Z | 34 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-29T04:00:01Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("ryvalenza/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
jeraldflowers/vit_model | jeraldflowers | 2022-11-29T03:51:31Z | 188 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-11-27T05:06:17Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
widget:
- src: https://huggingface.co/jeraldflowers/vit_model/blob/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/jeraldflowers/vit_model/blob/main/bean_rust.jpeg
example_title: Bean Rust
model-index:
- name: vit_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0095
- Accuracy: 1.0
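A minimal inference sketch; the image path below is a placeholder for any local photo or URL of a bean leaf:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jeraldflowers/vit_model")
# Placeholder path: replace with a local file or URL pointing to a bean-leaf image.
print(classifier("bean_leaf.jpeg"))
```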
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1526 | 3.85 | 500 | 0.0095 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
UCSYNLP/MyanBERTa | UCSYNLP | 2022-11-29T03:35:58Z | 297 | 3 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"MyanBERTa",
"Myanmar",
"BERT",
"RoBERTa",
"my",
"dataset:MyCorpus",
"dataset:Web",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-25T06:57:10Z | ---
language: my
tags:
- MyanBERTa
- Myanmar
- BERT
- RoBERTa
license: apache-2.0
datasets:
- MyCorpus
- Web
---
## Model description
This model is a BERT-based Myanmar pre-trained language model.
MyanBERTa was pre-trained for 528K steps on a word-segmented Myanmar dataset consisting of 5,992,299 sentences (136M words).
The tokenizer is a byte-level BPE tokenizer with 30,522 subword units, learned after word segmentation was applied.
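A minimal fill-mask sketch, assuming the model loads through the standard `fill-mask` pipeline; the English placeholder sentence should be replaced with word-segmented Myanmar text:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="UCSYNLP/MyanBERTa")
mask = fill_mask.tokenizer.mask_token  # "<mask>" for RoBERTa-style tokenizers
# Placeholder sentence; in practice, use word-segmented Myanmar text.
print(fill_mask(f"This is a {mask} example."))
```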
Cite this work as:
```
Aye Mya Hlaing, Win Pa Pa, "MyanBERTa: A Pre-trained Language Model For
Myanmar", In Proceedings of 2022 International Conference on Communication and Computer Research (ICCR2022), November 2022, Seoul, Republic of Korea
```
[Download Paper](https://journal-home.s3.ap-northeast-2.amazonaws.com/site/iccr2022/abs/QOHFI-0004.pdf)
|
jeraldflowers/distilroberts-base-mrpc-glue-jeraldflowers | jeraldflowers | 2022-11-29T02:57:36Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T05:30:00Z | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
"With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: distilroberts-base-mrpc-glue-jeraldflowers
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8431372549019608
- name: F1
type: f1
value: 0.8814814814814815
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberts-base-mrpc-glue-jeraldflowers
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4990
- Accuracy: 0.8431
- F1: 0.8815
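Because MRPC is a sentence-pair task, the two sentences have to be encoded together; a minimal sketch (the example pair is illustrative only):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "jeraldflowers/distilroberts-base-mrpc-glue-jeraldflowers"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the sentence pair together, as in MRPC fine-tuning.
enc = tokenizer(
    "Revenue in the first quarter of the year dropped 15 percent.",
    "First-quarter revenue fell by 15 percent from a year earlier.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**enc).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```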
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5289 | 1.09 | 500 | 0.5668 | 0.8211 | 0.8689 |
| 0.3675 | 2.18 | 1000 | 0.4990 | 0.8431 | 0.8815 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
neulab/omnitab-large-128shot-finetuned-wtq-128shot | neulab | 2022-11-29T02:55:31Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | table-question-answering | 2022-11-29T02:54:00Z | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-128shot-finetuned-wtq-128shot` (based on BART architecture) is initialized with `neulab/omnitab-large-128shot` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) in the 128-shot setting.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-128shot-finetuned-wtq-128shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-128shot-finetuned-wtq-128shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
neulab/omnitab-large-1024shot-finetuned-wtq-1024shot | neulab | 2022-11-29T02:45:55Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | table-question-answering | 2022-11-29T02:44:57Z | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-1024shot-finetuned-wtq-1024shot` (based on BART architecture) is initialized with `neulab/omnitab-large-1024shot` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) in the 1024-shot setting.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-1024shot-finetuned-wtq-1024shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-1024shot-finetuned-wtq-1024shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
npark/asr-conformer-ksponspeech | npark | 2022-11-29T02:25:40Z | 5 | 1 | null | [
"region:us"
] | null | 2022-11-29T01:26:29Z | # KsponSpeech ASR with Transformers
This repository provides pretrained end-to-end ASR models for KsponSpeech, built with SpeechBrain v0.5.13.
The model files in this repository were trained with the recipe at the URL below, using SpeechBrain version 0.5.13.
https://github.com/speechbrain/speechbrain/tree/develop/recipes/KsponSpeech/ASR/transformer
language:
- ko
datasets:
- KsponSpeech
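## Usage
A hedged loading sketch, assuming the repository exposes SpeechBrain's `EncoderDecoderASR` interface (i.e. a `hyperparams.yaml` and checkpoints in the layout that `from_hparams` expects); the audio path is a placeholder:
```python
from speechbrain.pretrained import EncoderDecoderASR

# Assumes the repo follows SpeechBrain's pretrained-model layout (hyperparams.yaml + checkpoints).
asr_model = EncoderDecoderASR.from_hparams(
    source="npark/asr-conformer-ksponspeech",
    savedir="pretrained_models/asr-conformer-ksponspeech",
)
# Placeholder path to a Korean speech recording.
print(asr_model.transcribe_file("example_korean_speech.wav"))
```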
## About SpeechBrain
* Website: https://speechbrain.github.io/
* Code: https://github.com/speechbrain/speechbrain/
* HuggingFace: https://huggingface.co/speechbrain/
|
huggingtweets/elonmusk-lexfridman | huggingtweets | 2022-11-29T01:35:11Z | 118 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/956331551435960322/OaqR8pAB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Lex Fridman</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-lexfridman</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Lex Fridman.
| Data | Elon Musk | Lex Fridman |
| --- | --- | --- |
| Tweets downloaded | 3198 | 2410 |
| Retweets | 126 | 253 |
| Short tweets | 968 | 49 |
| Tweets kept | 2104 | 2108 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18nt3c0k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-lexfridman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ozchvjo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ozchvjo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-lexfridman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
akmoyu/whisper-medium-mn | akmoyu | 2022-11-29T01:27:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"mn",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-27T12:12:01Z | ---
language:
- mn
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Medium Mn - akmoyu
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 42.52948885976409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Mn - akmoyu
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7233
- Wer: 42.5295
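A minimal transcription sketch; the audio path is a placeholder for any Mongolian speech recording (the pipeline resamples the audio as needed):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="akmoyu/whisper-medium-mn")
# Placeholder path to a Mongolian speech recording.
print(asr("mongolian_sample.wav"))
```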
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0182 | 7.94 | 1000 | 0.5995 | 46.5269 |
| 0.0027 | 15.87 | 2000 | 0.6499 | 44.2169 |
| 0.0002 | 23.81 | 3000 | 0.7057 | 42.5623 |
| 0.0001 | 31.75 | 4000 | 0.7233 | 42.5295 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
|
dlwh/legal-xlm-base_128k | dlwh | 2022-11-29T00:48:35Z | 4 | 2 | transformers | [
"transformers",
"roberta",
"fill-mask",
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-29T00:41:54Z | ---
license: apache-2.0
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
dataset:
- joelito/MultiLegalPile_Wikipedia_Filtered
---
Huggingface thinks this is a model, but it's just a tokenizer. Trained on https://huggingface.co/datasets/joelito/MultiLegalPile_Wikipedia_Filtered
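A minimal loading sketch, assuming the files resolve through `AutoTokenizer`:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dlwh/legal-xlm-base_128k")
# Placeholder sentence; the tokenizer covers the languages listed above.
print(tokenizer.tokenize("The plaintiff filed an appeal before the federal court."))
```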
|
matan-diamond/sd-class-butterflies-32 | matan-diamond | 2022-11-29T00:47:21Z | 36 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-29T00:46:35Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("matan-diamond/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
adrien-alloreview/whisper-small-fr | adrien-alloreview | 2022-11-29T00:13:29Z | 83 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-28T22:32:23Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2226
- eval_wer: 10.0023
- eval_runtime: 65.2041
- eval_samples_per_second: 1.748
- eval_steps_per_second: 0.23
- epoch: 19.51
- step: 800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
joweyel/sd-class-butterflies-32 | joweyel | 2022-11-28T23:54:45Z | 37 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-28T23:51:15Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of (more or less) cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("datboi223/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
Serhio/sd-fine-tune-v2 | Serhio | 2022-11-28T23:43:18Z | 34 | 0 | diffusers | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-11-28T23:41:46Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### sd-fine-tune-v2 on Stable Diffusion via Dreambooth
#### model by Serhio
This is the Stable Diffusion model fine-tuned on the sd-fine-tune-v2 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **Bashkov Sergey**
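A minimal inference sketch with `diffusers` (it assumes a CUDA GPU; fp16 is optional), using the instance prompt above:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Serhio/sd-fine-tune-v2", torch_dtype=torch.float16
).to("cuda")

# The fine-tuned concept is triggered by the instance prompt "Bashkov Sergey".
image = pipe("a portrait of Bashkov Sergey, oil painting").images[0]
image.save("bashkov_sergey.png")
```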
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
|
pig4431/TweetEval_BERT_5E | pig4431 | 2022-11-28T23:38:03Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T23:31:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: TweetEval_BERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: train
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.9266666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetEval_BERT_5E
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5419
- Accuracy: 0.9267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6264 | 0.04 | 50 | 0.5266 | 0.74 |
| 0.5054 | 0.08 | 100 | 0.5959 | 0.6333 |
| 0.4732 | 0.12 | 150 | 0.3524 | 0.86 |
| 0.3916 | 0.16 | 200 | 0.3195 | 0.8667 |
| 0.3477 | 0.2 | 250 | 0.2878 | 0.8867 |
| 0.3116 | 0.24 | 300 | 0.2903 | 0.92 |
| 0.3039 | 0.28 | 350 | 0.2488 | 0.8933 |
| 0.2633 | 0.32 | 400 | 0.2530 | 0.92 |
| 0.2667 | 0.37 | 450 | 0.2125 | 0.9267 |
| 0.2604 | 0.41 | 500 | 0.2628 | 0.8867 |
| 0.278 | 0.45 | 550 | 0.2322 | 0.8867 |
| 0.2625 | 0.49 | 600 | 0.1903 | 0.92 |
| 0.2808 | 0.53 | 650 | 0.2400 | 0.8933 |
| 0.2396 | 0.57 | 700 | 0.2184 | 0.9067 |
| 0.2571 | 0.61 | 750 | 0.1906 | 0.9133 |
| 0.2676 | 0.65 | 800 | 0.2467 | 0.9067 |
| 0.2288 | 0.69 | 850 | 0.2038 | 0.9133 |
| 0.2959 | 0.73 | 900 | 0.1941 | 0.9 |
| 0.2619 | 0.77 | 950 | 0.2100 | 0.9333 |
| 0.2504 | 0.81 | 1000 | 0.1523 | 0.9333 |
| 0.2338 | 0.85 | 1050 | 0.1429 | 0.94 |
| 0.2529 | 0.89 | 1100 | 0.1269 | 0.94 |
| 0.2238 | 0.93 | 1150 | 0.1722 | 0.9333 |
| 0.2295 | 0.97 | 1200 | 0.1874 | 0.94 |
| 0.2089 | 1.01 | 1250 | 0.2214 | 0.9067 |
| 0.1406 | 1.06 | 1300 | 0.3410 | 0.9133 |
| 0.1587 | 1.1 | 1350 | 0.3330 | 0.9133 |
| 0.1732 | 1.14 | 1400 | 0.2716 | 0.9133 |
| 0.195 | 1.18 | 1450 | 0.3726 | 0.92 |
| 0.1777 | 1.22 | 1500 | 0.2430 | 0.9267 |
| 0.1433 | 1.26 | 1550 | 0.3011 | 0.9267 |
| 0.1333 | 1.3 | 1600 | 0.2489 | 0.9333 |
| 0.1516 | 1.34 | 1650 | 0.3340 | 0.9267 |
| 0.1774 | 1.38 | 1700 | 0.2497 | 0.8933 |
| 0.1608 | 1.42 | 1750 | 0.3234 | 0.9 |
| 0.1534 | 1.46 | 1800 | 0.3383 | 0.9133 |
| 0.1287 | 1.5 | 1850 | 0.3134 | 0.9133 |
| 0.1422 | 1.54 | 1900 | 0.3330 | 0.9 |
| 0.1578 | 1.58 | 1950 | 0.3281 | 0.9133 |
| 0.1786 | 1.62 | 2000 | 0.2939 | 0.9267 |
| 0.2019 | 1.66 | 2050 | 0.3535 | 0.9 |
| 0.1995 | 1.7 | 2100 | 0.3032 | 0.9067 |
| 0.159 | 1.75 | 2150 | 0.2598 | 0.9267 |
| 0.1493 | 1.79 | 2200 | 0.2391 | 0.9267 |
| 0.1748 | 1.83 | 2250 | 0.2258 | 0.92 |
| 0.1783 | 1.87 | 2300 | 0.2749 | 0.9133 |
| 0.1619 | 1.91 | 2350 | 0.2699 | 0.92 |
| 0.1378 | 1.95 | 2400 | 0.2776 | 0.9067 |
| 0.1529 | 1.99 | 2450 | 0.2235 | 0.9333 |
| 0.1071 | 2.03 | 2500 | 0.2841 | 0.9267 |
| 0.0812 | 2.07 | 2550 | 0.3178 | 0.9267 |
| 0.0464 | 2.11 | 2600 | 0.3567 | 0.92 |
| 0.1108 | 2.15 | 2650 | 0.2723 | 0.92 |
| 0.0845 | 2.19 | 2700 | 0.2774 | 0.9267 |
| 0.0795 | 2.23 | 2750 | 0.3027 | 0.9267 |
| 0.0403 | 2.27 | 2800 | 0.3566 | 0.9267 |
| 0.0664 | 2.31 | 2850 | 0.4015 | 0.92 |
| 0.0659 | 2.35 | 2900 | 0.4298 | 0.9067 |
| 0.1059 | 2.39 | 2950 | 0.4028 | 0.92 |
| 0.105 | 2.44 | 3000 | 0.3701 | 0.92 |
| 0.0808 | 2.48 | 3050 | 0.3206 | 0.9267 |
| 0.0811 | 2.52 | 3100 | 0.3644 | 0.9133 |
| 0.0458 | 2.56 | 3150 | 0.3781 | 0.9267 |
| 0.0764 | 2.6 | 3200 | 0.3749 | 0.9267 |
| 0.0567 | 2.64 | 3250 | 0.3995 | 0.92 |
| 0.0971 | 2.68 | 3300 | 0.3455 | 0.92 |
| 0.0579 | 2.72 | 3350 | 0.4508 | 0.92 |
| 0.0853 | 2.76 | 3400 | 0.4350 | 0.92 |
| 0.0577 | 2.8 | 3450 | 0.3804 | 0.9333 |
| 0.0732 | 2.84 | 3500 | 0.4387 | 0.92 |
| 0.0874 | 2.88 | 3550 | 0.3885 | 0.9333 |
| 0.1031 | 2.92 | 3600 | 0.3937 | 0.92 |
| 0.0335 | 2.96 | 3650 | 0.4963 | 0.8933 |
| 0.0913 | 3.0 | 3700 | 0.3827 | 0.9333 |
| 0.047 | 3.04 | 3750 | 0.4136 | 0.92 |
| 0.0531 | 3.08 | 3800 | 0.4362 | 0.92 |
| 0.0265 | 3.12 | 3850 | 0.4857 | 0.92 |
| 0.038 | 3.17 | 3900 | 0.4425 | 0.92 |
| 0.0294 | 3.21 | 3950 | 0.4347 | 0.92 |
| 0.0367 | 3.25 | 4000 | 0.4291 | 0.9333 |
| 0.0102 | 3.29 | 4050 | 0.5178 | 0.9267 |
| 0.0311 | 3.33 | 4100 | 0.4784 | 0.9267 |
| 0.0274 | 3.37 | 4150 | 0.5421 | 0.9267 |
| 0.0275 | 3.41 | 4200 | 0.5194 | 0.92 |
| 0.0795 | 3.45 | 4250 | 0.4788 | 0.92 |
| 0.0413 | 3.49 | 4300 | 0.4393 | 0.9267 |
| 0.0373 | 3.53 | 4350 | 0.4965 | 0.92 |
| 0.0303 | 3.57 | 4400 | 0.4284 | 0.9267 |
| 0.0248 | 3.61 | 4450 | 0.4476 | 0.9267 |
| 0.0557 | 3.65 | 4500 | 0.4690 | 0.92 |
| 0.0358 | 3.69 | 4550 | 0.4774 | 0.9133 |
| 0.0194 | 3.73 | 4600 | 0.4755 | 0.92 |
| 0.0473 | 3.77 | 4650 | 0.4637 | 0.92 |
| 0.0133 | 3.81 | 4700 | 0.4868 | 0.92 |
| 0.0204 | 3.86 | 4750 | 0.4886 | 0.9267 |
| 0.0338 | 3.9 | 4800 | 0.5101 | 0.9267 |
| 0.0424 | 3.94 | 4850 | 0.4812 | 0.9267 |
| 0.0237 | 3.98 | 4900 | 0.4837 | 0.9267 |
| 0.0372 | 4.02 | 4950 | 0.5000 | 0.9267 |
| 0.0254 | 4.06 | 5000 | 0.5210 | 0.92 |
| 0.024 | 4.1 | 5050 | 0.5272 | 0.92 |
| 0.0117 | 4.14 | 5100 | 0.5447 | 0.92 |
| 0.018 | 4.18 | 5150 | 0.5353 | 0.92 |
| 0.0097 | 4.22 | 5200 | 0.5415 | 0.9267 |
| 0.0151 | 4.26 | 5250 | 0.5447 | 0.9267 |
| 0.0118 | 4.3 | 5300 | 0.5285 | 0.9267 |
| 0.0004 | 4.34 | 5350 | 0.5399 | 0.9267 |
| 0.0102 | 4.38 | 5400 | 0.5552 | 0.9267 |
| 0.0012 | 4.42 | 5450 | 0.5689 | 0.92 |
| 0.02 | 4.46 | 5500 | 0.5619 | 0.9267 |
| 0.0056 | 4.5 | 5550 | 0.5784 | 0.92 |
| 0.0271 | 4.55 | 5600 | 0.5766 | 0.92 |
| 0.0191 | 4.59 | 5650 | 0.5662 | 0.92 |
| 0.0311 | 4.63 | 5700 | 0.5514 | 0.9267 |
| 0.0167 | 4.67 | 5750 | 0.5510 | 0.9267 |
| 0.0293 | 4.71 | 5800 | 0.5571 | 0.9267 |
| 0.0304 | 4.75 | 5850 | 0.5494 | 0.92 |
| 0.0161 | 4.79 | 5900 | 0.5469 | 0.9267 |
| 0.0017 | 4.83 | 5950 | 0.5468 | 0.9267 |
| 0.0176 | 4.87 | 6000 | 0.5426 | 0.9267 |
| 0.0094 | 4.91 | 6050 | 0.5402 | 0.9267 |
| 0.0041 | 4.95 | 6100 | 0.5416 | 0.9267 |
| 0.0281 | 4.99 | 6150 | 0.5419 | 0.9267 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.2
|
jiping/whisper-small-jsun2-hi | jiping | 2022-11-28T22:38:58Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-24T21:04:14Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Jsun Hi - Jiping
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 31.761618555828324
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Jsun Hi - Jiping
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2775
- Wer: 31.7616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2092 | 0.61 | 1000 | 0.3201 | 38.7666 |
| 0.1106 | 1.22 | 2000 | 0.2810 | 34.1023 |
| 0.1049 | 1.83 | 3000 | 0.2660 | 32.4812 |
| 0.052 | 2.45 | 4000 | 0.2775 | 31.7616 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
rahul77/t5-small-finetuned-rahul-summariza | rahul77 | 2022-11-28T22:11:14Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-28T22:03:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-rahul-summariza
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-rahul-summariza
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7002
- Rouge1: 29.5043
- Rouge2: 23.832
- Rougel: 27.5786
- Rougelsum: 28.404
- Gen Len: 19.0
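A minimal summarization sketch; the input text is a placeholder, since the fine-tuning dataset is not documented here:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="rahul77/t5-small-finetuned-rahul-summariza")
# Placeholder input document.
text = "Replace this with the document you want to summarize."
print(summarizer(text, max_length=60, min_length=10))
```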
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.123 | 1.0 | 16 | 0.8258 | 27.2788 | 21.3634 | 25.7114 | 26.7324 | 19.0 |
| 0.9067 | 2.0 | 32 | 0.7539 | 28.873 | 23.5401 | 27.2337 | 27.939 | 19.0 |
| 0.8137 | 3.0 | 48 | 0.7280 | 29.1767 | 23.6599 | 27.7065 | 28.3569 | 19.0 |
| 0.7872 | 4.0 | 64 | 0.7230 | 29.0451 | 23.4597 | 27.2762 | 28.1324 | 19.0 |
| 0.7338 | 5.0 | 80 | 0.7133 | 29.4821 | 23.8113 | 27.4912 | 28.326 | 19.0 |
| 0.6913 | 6.0 | 96 | 0.7101 | 29.4237 | 23.8523 | 27.4109 | 28.2418 | 19.0 |
| 0.6679 | 7.0 | 112 | 0.7097 | 29.4237 | 23.8523 | 27.4109 | 28.2418 | 19.0 |
| 0.6963 | 8.0 | 128 | 0.7046 | 29.4237 | 23.8523 | 27.4109 | 28.2418 | 19.0 |
| 0.6223 | 9.0 | 144 | 0.7052 | 29.4237 | 23.7633 | 27.493 | 28.3362 | 19.0 |
| 0.6494 | 10.0 | 160 | 0.7019 | 29.4237 | 23.7633 | 27.493 | 28.3362 | 19.0 |
| 0.616 | 11.0 | 176 | 0.7010 | 29.4237 | 23.7633 | 27.493 | 28.3362 | 19.0 |
| 0.6058 | 12.0 | 192 | 0.7028 | 29.4237 | 23.7633 | 27.493 | 28.3362 | 19.0 |
| 0.5964 | 13.0 | 208 | 0.6996 | 29.4237 | 23.7633 | 27.493 | 28.3362 | 19.0 |
| 0.5958 | 14.0 | 224 | 0.6997 | 29.4237 | 23.7633 | 27.493 | 28.3362 | 19.0 |
| 0.57 | 15.0 | 240 | 0.6996 | 29.5043 | 23.832 | 27.5786 | 28.404 | 19.0 |
| 0.5714 | 16.0 | 256 | 0.6998 | 29.5043 | 23.832 | 27.5786 | 28.404 | 19.0 |
| 0.5648 | 17.0 | 272 | 0.6999 | 29.5043 | 23.832 | 27.5786 | 28.404 | 19.0 |
| 0.5258 | 18.0 | 288 | 0.7005 | 29.5043 | 23.832 | 27.5786 | 28.404 | 19.0 |
| 0.5692 | 19.0 | 304 | 0.7001 | 29.5043 | 23.832 | 27.5786 | 28.404 | 19.0 |
| 0.5708 | 20.0 | 320 | 0.7002 | 29.5043 | 23.832 | 27.5786 | 28.404 | 19.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ThomasSimonini/ML-Agents-SnowballFight-1vs1-model | ThomasSimonini | 2022-11-28T22:07:31Z | 6 | 0 | ml-agents | [
"ml-agents",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Snowballfight-1vs1",
"region:us"
] | reinforcement-learning | 2022-11-28T21:26:07Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Snowballfight-1vs1
library_name: ml-agents
--- |
alryan1478/gpt-neo-125M-wikitext2 | alryan1478 | 2022-11-28T21:57:47Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-22T20:55:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125M-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-wikitext2
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 259 | 6.4308 |
| 6.8563 | 2.0 | 518 | 6.0898 |
| 6.8563 | 3.0 | 777 | 6.0325 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
michaelmayo704/sd-class-butterflies-64 | michaelmayo704 | 2022-11-28T21:39:43Z | 34 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-28T21:38:51Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("michaelmayo704/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
pig4431/TUF_ALBERT_5E | pig4431 | 2022-11-28T21:34:30Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T21:32:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TUF_ALBERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TUF_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2389
- Accuracy: 0.9533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5099 | 0.1 | 50 | 0.3861 | 0.8533 |
| 0.2985 | 0.2 | 100 | 0.2961 | 0.8933 |
| 0.2972 | 0.3 | 150 | 0.2335 | 0.9333 |
| 0.2835 | 0.4 | 200 | 0.1872 | 0.94 |
| 0.26 | 0.5 | 250 | 0.4147 | 0.9133 |
| 0.2986 | 0.59 | 300 | 0.2080 | 0.9267 |
| 0.2554 | 0.69 | 350 | 0.3984 | 0.9133 |
| 0.2306 | 0.79 | 400 | 0.2136 | 0.9333 |
| 0.2218 | 0.89 | 450 | 0.4455 | 0.8867 |
| 0.2113 | 0.99 | 500 | 0.2205 | 0.94 |
| 0.2541 | 1.09 | 550 | 0.1705 | 0.9333 |
| 0.1947 | 1.19 | 600 | 0.3264 | 0.8933 |
| 0.2409 | 1.29 | 650 | 0.2084 | 0.92 |
| 0.1968 | 1.39 | 700 | 0.2550 | 0.9267 |
| 0.172 | 1.49 | 750 | 0.2238 | 0.9467 |
| 0.1478 | 1.58 | 800 | 0.2501 | 0.9533 |
| 0.2199 | 1.68 | 850 | 0.2618 | 0.9133 |
| 0.1792 | 1.78 | 900 | 0.2109 | 0.9267 |
| 0.1831 | 1.88 | 950 | 0.2641 | 0.92 |
| 0.1534 | 1.98 | 1000 | 0.1924 | 0.94 |
| 0.1208 | 2.08 | 1050 | 0.2990 | 0.9333 |
| 0.1118 | 2.18 | 1100 | 0.4952 | 0.9 |
| 0.158 | 2.28 | 1150 | 0.1706 | 0.9533 |
| 0.1163 | 2.38 | 1200 | 0.1238 | 0.9733 |
| 0.1738 | 2.48 | 1250 | 0.1989 | 0.9467 |
| 0.1305 | 2.57 | 1300 | 0.4354 | 0.9067 |
| 0.1668 | 2.67 | 1350 | 0.1276 | 0.9667 |
| 0.1195 | 2.77 | 1400 | 0.2170 | 0.9533 |
| 0.1057 | 2.87 | 1450 | 0.2882 | 0.9333 |
| 0.1172 | 2.97 | 1500 | 0.1435 | 0.9667 |
| 0.0893 | 3.07 | 1550 | 0.1754 | 0.96 |
| 0.0582 | 3.17 | 1600 | 0.1858 | 0.96 |
| 0.0887 | 3.27 | 1650 | 0.4954 | 0.92 |
| 0.1166 | 3.37 | 1700 | 0.2356 | 0.9467 |
| 0.0518 | 3.47 | 1750 | 0.1910 | 0.96 |
| 0.0741 | 3.56 | 1800 | 0.1328 | 0.9733 |
| 0.072 | 3.66 | 1850 | 0.2769 | 0.9467 |
| 0.0534 | 3.76 | 1900 | 0.3501 | 0.94 |
| 0.0776 | 3.86 | 1950 | 0.3171 | 0.94 |
| 0.0537 | 3.96 | 2000 | 0.2138 | 0.9533 |
| 0.0683 | 4.06 | 2050 | 0.2934 | 0.94 |
| 0.015 | 4.16 | 2100 | 0.2233 | 0.9533 |
| 0.0236 | 4.26 | 2150 | 0.2673 | 0.9533 |
| 0.0357 | 4.36 | 2200 | 0.2279 | 0.96 |
| 0.0298 | 4.46 | 2250 | 0.3017 | 0.9467 |
| 0.0357 | 4.55 | 2300 | 0.2910 | 0.9467 |
| 0.0208 | 4.65 | 2350 | 0.2498 | 0.9533 |
| 0.0345 | 4.75 | 2400 | 0.2259 | 0.9667 |
| 0.0174 | 4.85 | 2450 | 0.2274 | 0.9667 |
| 0.0393 | 4.95 | 2500 | 0.2389 | 0.9533 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
anikethjr/PromoGen_K562_2080Ti_restart | anikethjr | 2022-11-28T21:24:36Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"prophetnet",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-27T05:27:24Z | ---
tags:
- generated_from_trainer
model-index:
- name: PromoGen_K562_2080Ti_restart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PromoGen_K562_2080Ti_restart
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.7676 | 0.49 | 2500 | 0.7383 |
| 0.7121 | 0.97 | 5000 | 0.6867 |
| 0.6914 | 1.46 | 7500 | 0.6705 |
| 0.6837 | 1.95 | 10000 | 0.6622 |
| 0.6778 | 2.44 | 12500 | 0.6558 |
| 0.6748 | 2.92 | 15000 | 0.6517 |
| 0.6676 | 3.41 | 17500 | 0.6433 |
| 0.6593 | 3.9 | 20000 | 0.6358 |
| 0.6584 | 4.38 | 22500 | 0.6320 |
| 0.6557 | 4.87 | 25000 | 0.6301 |
| 0.6523 | 5.36 | 27500 | 0.6257 |
| 0.6478 | 5.84 | 30000 | 0.6236 |
| 0.6393 | 6.33 | 32500 | 0.6145 |
| 0.6039 | 6.82 | 35000 | 0.5658 |
| 0.5616 | 7.31 | 37500 | 0.5376 |
| 0.5518 | 7.79 | 40000 | 0.5310 |
| 0.5509 | 8.28 | 42500 | 0.5273 |
| 0.5487 | 8.77 | 45000 | 0.5261 |
| 0.5479 | 9.25 | 47500 | 0.5249 |
| 0.546 | 9.74 | 50000 | 0.5242 |
| 0.5447 | 10.23 | 52500 | 0.5229 |
| 0.5439 | 10.71 | 55000 | 0.5220 |
| 0.5433 | 11.2 | 57500 | 0.5209 |
| 0.5394 | 11.69 | 60000 | 0.5162 |
| 0.5153 | 12.18 | 62500 | 0.4944 |
| 0.5137 | 12.66 | 65000 | 0.4932 |
| 0.514 | 13.15 | 67500 | 0.4924 |
| 0.5131 | 13.64 | 70000 | 0.4919 |
| 0.5104 | 14.12 | 72500 | 0.4914 |
| 0.5122 | 14.61 | 75000 | 0.4906 |
| 0.5089 | 15.1 | 77500 | 0.4901 |
| 0.5076 | 15.59 | 80000 | 0.4891 |
| 0.4986 | 16.07 | 82500 | 0.4721 |
| 0.4875 | 16.56 | 85000 | 0.4672 |
| 0.4887 | 17.05 | 87500 | 0.4669 |
| 0.4839 | 17.53 | 90000 | 0.4661 |
| 0.4849 | 18.02 | 92500 | 0.4654 |
| 0.4848 | 18.51 | 95000 | 0.4649 |
| 0.4831 | 18.99 | 97500 | 0.4646 |
| 0.4816 | 19.48 | 100000 | 0.4644 |
| 0.4808 | 19.97 | 102500 | 0.4637 |
| 0.4812 | 20.46 | 105000 | 0.4634 |
| 0.4813 | 20.94 | 107500 | 0.4633 |
| 0.4818 | 21.43 | 110000 | 0.4631 |
| 0.4813 | 21.92 | 112500 | 0.4629 |
| 0.4782 | 22.4 | 115000 | 0.4628 |
| 0.4804 | 22.89 | 117500 | 0.4626 |
| 0.4815 | 23.38 | 120000 | 0.4625 |
| 0.4812 | 23.87 | 122500 | 0.4625 |
| 0.4785 | 24.35 | 125000 | 0.4624 |
| 0.4795 | 24.84 | 127500 | 0.4624 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.0
- Tokenizers 0.13.0.dev0
|
Inayat/Fine_tune_whisper_small | Inayat | 2022-11-28T21:14:32Z | 79 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-14T19:18:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Fine_tune_whisper_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_tune_whisper_small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8238
- Wer: 42.9362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 900
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2994 | 3.92 | 200 | 0.6607 | 44.0797 |
| 0.0201 | 7.84 | 400 | 0.7371 | 42.6042 |
| 0.002 | 11.76 | 600 | 0.8027 | 42.5304 |
| 0.0011 | 15.69 | 800 | 0.8238 | 42.9362 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pig4431/TweetEval_DistilBERT_5E | pig4431 | 2022-11-28T21:09:36Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T21:03:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: TweetEval_DistilBERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: train
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.9133333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetEval_DistilBERT_5E
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4043
- Accuracy: 0.9133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5747 | 0.04 | 50 | 0.4843 | 0.7333 |
| 0.4336 | 0.08 | 100 | 0.2888 | 0.8667 |
| 0.3437 | 0.12 | 150 | 0.2895 | 0.8667 |
| 0.3375 | 0.16 | 200 | 0.2864 | 0.8733 |
| 0.3072 | 0.2 | 250 | 0.2577 | 0.8867 |
| 0.3019 | 0.24 | 300 | 0.2574 | 0.8933 |
| 0.2662 | 0.28 | 350 | 0.2621 | 0.8867 |
| 0.283 | 0.32 | 400 | 0.2340 | 0.92 |
| 0.2949 | 0.37 | 450 | 0.2482 | 0.8933 |
| 0.3066 | 0.41 | 500 | 0.2537 | 0.9 |
| 0.2457 | 0.45 | 550 | 0.2473 | 0.9 |
| 0.295 | 0.49 | 600 | 0.2177 | 0.9133 |
| 0.2862 | 0.53 | 650 | 0.2215 | 0.9133 |
| 0.2603 | 0.57 | 700 | 0.2272 | 0.9133 |
| 0.2976 | 0.61 | 750 | 0.2298 | 0.9067 |
| 0.2823 | 0.65 | 800 | 0.2451 | 0.8933 |
| 0.2583 | 0.69 | 850 | 0.2645 | 0.8933 |
| 0.2694 | 0.73 | 900 | 0.2352 | 0.9 |
| 0.2433 | 0.77 | 950 | 0.2322 | 0.9133 |
| 0.2598 | 0.81 | 1000 | 0.2300 | 0.9 |
| 0.2701 | 0.85 | 1050 | 0.2162 | 0.9 |
| 0.2227 | 0.89 | 1100 | 0.2135 | 0.8933 |
| 0.2045 | 0.93 | 1150 | 0.2233 | 0.9133 |
| 0.2821 | 0.97 | 1200 | 0.2194 | 0.9 |
| 0.2342 | 1.01 | 1250 | 0.2488 | 0.88 |
| 0.2028 | 1.06 | 1300 | 0.2451 | 0.8867 |
| 0.1509 | 1.1 | 1350 | 0.3174 | 0.88 |
| 0.1888 | 1.14 | 1400 | 0.2537 | 0.9133 |
| 0.1825 | 1.18 | 1450 | 0.2559 | 0.9067 |
| 0.1721 | 1.22 | 1500 | 0.2511 | 0.92 |
| 0.2137 | 1.26 | 1550 | 0.2963 | 0.9133 |
| 0.2153 | 1.3 | 1600 | 0.2210 | 0.92 |
| 0.1989 | 1.34 | 1650 | 0.2231 | 0.9133 |
| 0.2155 | 1.38 | 1700 | 0.1991 | 0.9133 |
| 0.1912 | 1.42 | 1750 | 0.2146 | 0.92 |
| 0.1623 | 1.46 | 1800 | 0.2721 | 0.9 |
| 0.2236 | 1.5 | 1850 | 0.2301 | 0.9267 |
| 0.1907 | 1.54 | 1900 | 0.1988 | 0.92 |
| 0.1286 | 1.58 | 1950 | 0.2326 | 0.9 |
| 0.2147 | 1.62 | 2000 | 0.2432 | 0.9267 |
| 0.2018 | 1.66 | 2050 | 0.2162 | 0.9067 |
| 0.2073 | 1.7 | 2100 | 0.2153 | 0.9133 |
| 0.1498 | 1.75 | 2150 | 0.2335 | 0.92 |
| 0.1812 | 1.79 | 2200 | 0.2275 | 0.9267 |
| 0.1482 | 1.83 | 2250 | 0.2734 | 0.9 |
| 0.2233 | 1.87 | 2300 | 0.2454 | 0.9 |
| 0.1673 | 1.91 | 2350 | 0.2394 | 0.92 |
| 0.1555 | 1.95 | 2400 | 0.2725 | 0.92 |
| 0.2082 | 1.99 | 2450 | 0.2684 | 0.9133 |
| 0.1545 | 2.03 | 2500 | 0.3049 | 0.9067 |
| 0.1384 | 2.07 | 2550 | 0.2960 | 0.9133 |
| 0.1201 | 2.11 | 2600 | 0.3259 | 0.9 |
| 0.1348 | 2.15 | 2650 | 0.3091 | 0.9133 |
| 0.1046 | 2.19 | 2700 | 0.2916 | 0.9267 |
| 0.1506 | 2.23 | 2750 | 0.2910 | 0.9133 |
| 0.1481 | 2.27 | 2800 | 0.2855 | 0.9067 |
| 0.1318 | 2.31 | 2850 | 0.3075 | 0.9 |
| 0.1204 | 2.35 | 2900 | 0.3169 | 0.8933 |
| 0.1669 | 2.39 | 2950 | 0.3050 | 0.9067 |
| 0.1725 | 2.44 | 3000 | 0.2970 | 0.9133 |
| 0.1305 | 2.48 | 3050 | 0.3065 | 0.9 |
| 0.1508 | 2.52 | 3100 | 0.3079 | 0.9133 |
| 0.184 | 2.56 | 3150 | 0.3482 | 0.9067 |
| 0.1263 | 2.6 | 3200 | 0.3310 | 0.9 |
| 0.1282 | 2.64 | 3250 | 0.3520 | 0.8933 |
| 0.1217 | 2.68 | 3300 | 0.3158 | 0.9067 |
| 0.1203 | 2.72 | 3350 | 0.3351 | 0.92 |
| 0.1068 | 2.76 | 3400 | 0.3239 | 0.92 |
| 0.1517 | 2.8 | 3450 | 0.3247 | 0.92 |
| 0.113 | 2.84 | 3500 | 0.3269 | 0.9133 |
| 0.1276 | 2.88 | 3550 | 0.3162 | 0.92 |
| 0.1548 | 2.92 | 3600 | 0.3196 | 0.9133 |
| 0.1305 | 2.96 | 3650 | 0.3163 | 0.92 |
| 0.149 | 3.0 | 3700 | 0.3013 | 0.92 |
| 0.0816 | 3.04 | 3750 | 0.3097 | 0.9267 |
| 0.0884 | 3.08 | 3800 | 0.3028 | 0.92 |
| 0.0727 | 3.12 | 3850 | 0.3487 | 0.9133 |
| 0.1018 | 3.17 | 3900 | 0.3447 | 0.92 |
| 0.1266 | 3.21 | 3950 | 0.3589 | 0.9133 |
| 0.1216 | 3.25 | 4000 | 0.3464 | 0.92 |
| 0.091 | 3.29 | 4050 | 0.3454 | 0.92 |
| 0.0829 | 3.33 | 4100 | 0.3450 | 0.92 |
| 0.1084 | 3.37 | 4150 | 0.3670 | 0.92 |
| 0.0754 | 3.41 | 4200 | 0.3661 | 0.92 |
| 0.094 | 3.45 | 4250 | 0.3588 | 0.9067 |
| 0.0641 | 3.49 | 4300 | 0.3936 | 0.92 |
| 0.1138 | 3.53 | 4350 | 0.3616 | 0.92 |
| 0.0744 | 3.57 | 4400 | 0.3562 | 0.92 |
| 0.0697 | 3.61 | 4450 | 0.3532 | 0.9267 |
| 0.1083 | 3.65 | 4500 | 0.3451 | 0.9267 |
| 0.0701 | 3.69 | 4550 | 0.3307 | 0.92 |
| 0.0849 | 3.73 | 4600 | 0.3797 | 0.92 |
| 0.09 | 3.77 | 4650 | 0.3746 | 0.9267 |
| 0.0799 | 3.81 | 4700 | 0.3799 | 0.92 |
| 0.0589 | 3.86 | 4750 | 0.3805 | 0.92 |
| 0.0578 | 3.9 | 4800 | 0.3910 | 0.9133 |
| 0.0816 | 3.94 | 4850 | 0.3856 | 0.9133 |
| 0.1366 | 3.98 | 4900 | 0.3707 | 0.92 |
| 0.0846 | 4.02 | 4950 | 0.3802 | 0.92 |
| 0.0401 | 4.06 | 5000 | 0.3842 | 0.92 |
| 0.0851 | 4.1 | 5050 | 0.3773 | 0.9267 |
| 0.0514 | 4.14 | 5100 | 0.3922 | 0.9133 |
| 0.0909 | 4.18 | 5150 | 0.3893 | 0.92 |
| 0.0764 | 4.22 | 5200 | 0.3818 | 0.9133 |
| 0.1208 | 4.26 | 5250 | 0.4096 | 0.92 |
| 0.0689 | 4.3 | 5300 | 0.3940 | 0.9133 |
| 0.0524 | 4.34 | 5350 | 0.4020 | 0.9133 |
| 0.0733 | 4.38 | 5400 | 0.4002 | 0.9133 |
| 0.0699 | 4.42 | 5450 | 0.4013 | 0.9133 |
| 0.0712 | 4.46 | 5500 | 0.4037 | 0.9067 |
| 0.0557 | 4.5 | 5550 | 0.4121 | 0.92 |
| 0.0679 | 4.55 | 5600 | 0.4067 | 0.9133 |
| 0.0651 | 4.59 | 5650 | 0.4194 | 0.9133 |
| 0.0607 | 4.63 | 5700 | 0.4007 | 0.9133 |
| 0.0676 | 4.67 | 5750 | 0.4013 | 0.9133 |
| 0.0303 | 4.71 | 5800 | 0.3984 | 0.9133 |
| 0.0674 | 4.75 | 5850 | 0.4037 | 0.9133 |
| 0.0842 | 4.79 | 5900 | 0.4072 | 0.9133 |
| 0.0516 | 4.83 | 5950 | 0.4096 | 0.9133 |
| 0.0556 | 4.87 | 6000 | 0.4111 | 0.92 |
| 0.0277 | 4.91 | 6050 | 0.4079 | 0.9133 |
| 0.0629 | 4.95 | 6100 | 0.4053 | 0.9133 |
| 0.0426 | 4.99 | 6150 | 0.4043 | 0.9133 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.2
|
michaelmayo704/sd-class-butterflies-32 | michaelmayo704 | 2022-11-28T21:00:24Z | 35 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-28T20:59:37Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("michaelmayo704/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
pig4431/YELP_roBERTa_5E | pig4431 | 2022-11-28T20:50:36Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T20:34:22Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: YELP_roBERTa_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: train
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.9866666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# YELP_roBERTa_5E
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0995
- Accuracy: 0.9867
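No usage snippet is included; the hedged sketch below loads the checkpoint directly and reads the predicted class from the saved config, since the card does not document the label names.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "pig4431/YELP_roBERTa_5E"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The food was fantastic and the staff was friendly.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
# The label meaning comes from the saved id2label mapping, not from this card.
print(model.config.id2label[predicted_id])
```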
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5721 | 0.03 | 50 | 0.3248 | 0.88 |
| 0.2836 | 0.06 | 100 | 0.1190 | 0.9733 |
| 0.1793 | 0.1 | 150 | 0.1707 | 0.96 |
| 0.2196 | 0.13 | 200 | 0.0841 | 0.9733 |
| 0.2102 | 0.16 | 250 | 0.0634 | 0.9867 |
| 0.2197 | 0.19 | 300 | 0.0763 | 0.98 |
| 0.1866 | 0.22 | 350 | 0.0640 | 0.9867 |
| 0.1717 | 0.26 | 400 | 0.0612 | 0.9867 |
| 0.1443 | 0.29 | 450 | 0.0844 | 0.9733 |
| 0.1669 | 0.32 | 500 | 0.1297 | 0.9667 |
| 0.2005 | 0.35 | 550 | 0.0644 | 0.9867 |
| 0.1543 | 0.38 | 600 | 0.0874 | 0.9867 |
| 0.1345 | 0.42 | 650 | 0.1853 | 0.96 |
| 0.1664 | 0.45 | 700 | 0.1157 | 0.9667 |
| 0.1876 | 0.48 | 750 | 0.0474 | 0.9733 |
| 0.111 | 0.51 | 800 | 0.0645 | 0.98 |
| 0.1511 | 0.54 | 850 | 0.0432 | 0.9933 |
| 0.1846 | 0.58 | 900 | 0.0505 | 0.9867 |
| 0.151 | 0.61 | 950 | 0.0452 | 0.98 |
| 0.1338 | 0.64 | 1000 | 0.1007 | 0.98 |
| 0.1175 | 0.67 | 1050 | 0.0747 | 0.9867 |
| 0.1818 | 0.7 | 1100 | 0.0852 | 0.98 |
| 0.1557 | 0.74 | 1150 | 0.0255 | 0.9933 |
| 0.1487 | 0.77 | 1200 | 0.1266 | 0.9733 |
| 0.1315 | 0.8 | 1250 | 0.0593 | 0.9867 |
| 0.1059 | 0.83 | 1300 | 0.0697 | 0.9867 |
| 0.108 | 0.86 | 1350 | 0.0459 | 0.9933 |
| 0.1525 | 0.9 | 1400 | 0.0446 | 0.9933 |
| 0.1185 | 0.93 | 1450 | 0.0528 | 0.9867 |
| 0.1611 | 0.96 | 1500 | 0.0582 | 0.9867 |
| 0.1556 | 0.99 | 1550 | 0.0726 | 0.98 |
| 0.0902 | 1.02 | 1600 | 0.0466 | 0.9867 |
| 0.1535 | 1.06 | 1650 | 0.0850 | 0.9733 |
| 0.0787 | 1.09 | 1700 | 0.0869 | 0.9867 |
| 0.1019 | 1.12 | 1750 | 0.0984 | 0.98 |
| 0.1234 | 1.15 | 1800 | 0.0358 | 0.9933 |
| 0.0884 | 1.18 | 1850 | 0.0621 | 0.9867 |
| 0.0785 | 1.22 | 1900 | 0.0507 | 0.9867 |
| 0.1454 | 1.25 | 1950 | 0.0793 | 0.98 |
| 0.1035 | 1.28 | 2000 | 0.0501 | 0.9867 |
| 0.0579 | 1.31 | 2050 | 0.0935 | 0.9867 |
| 0.1215 | 1.34 | 2100 | 0.0079 | 1.0 |
| 0.0958 | 1.38 | 2150 | 0.0673 | 0.9867 |
| 0.106 | 1.41 | 2200 | 0.0875 | 0.9867 |
| 0.095 | 1.44 | 2250 | 0.0745 | 0.9867 |
| 0.0958 | 1.47 | 2300 | 0.0715 | 0.9867 |
| 0.085 | 1.5 | 2350 | 0.0742 | 0.9867 |
| 0.082 | 1.54 | 2400 | 0.1053 | 0.9733 |
| 0.1202 | 1.57 | 2450 | 0.0711 | 0.9867 |
| 0.1041 | 1.6 | 2500 | 0.0723 | 0.9867 |
| 0.1145 | 1.63 | 2550 | 0.0361 | 0.9867 |
| 0.0909 | 1.66 | 2600 | 0.0868 | 0.9867 |
| 0.1029 | 1.7 | 2650 | 0.0680 | 0.9867 |
| 0.1083 | 1.73 | 2700 | 0.0599 | 0.9867 |
| 0.0871 | 1.76 | 2750 | 0.0452 | 0.9867 |
| 0.1506 | 1.79 | 2800 | 0.0344 | 0.9933 |
| 0.0778 | 1.82 | 2850 | 0.0380 | 0.9933 |
| 0.0982 | 1.86 | 2900 | 0.0349 | 0.9933 |
| 0.1296 | 1.89 | 2950 | 0.0713 | 0.9867 |
| 0.0836 | 1.92 | 3000 | 0.0693 | 0.9867 |
| 0.0699 | 1.95 | 3050 | 0.1023 | 0.98 |
| 0.0631 | 1.98 | 3100 | 0.0852 | 0.98 |
| 0.0724 | 2.02 | 3150 | 0.0835 | 0.9867 |
| 0.0898 | 2.05 | 3200 | 0.0872 | 0.9867 |
| 0.0642 | 2.08 | 3250 | 0.0427 | 0.9933 |
| 0.0524 | 2.11 | 3300 | 0.0731 | 0.9867 |
| 0.0415 | 2.14 | 3350 | 0.0632 | 0.9867 |
| 0.0604 | 2.18 | 3400 | 0.0428 | 0.9867 |
| 0.0701 | 2.21 | 3450 | 0.0671 | 0.9867 |
| 0.0668 | 2.24 | 3500 | 0.0360 | 0.9933 |
| 0.0442 | 2.27 | 3550 | 0.0454 | 0.9933 |
| 0.0677 | 2.3 | 3600 | 0.0517 | 0.9867 |
| 0.0965 | 2.34 | 3650 | 0.0659 | 0.98 |
| 0.0781 | 2.37 | 3700 | 0.0732 | 0.9867 |
| 0.0421 | 2.4 | 3750 | 0.0855 | 0.9867 |
| 0.0674 | 2.43 | 3800 | 0.0813 | 0.9867 |
| 0.0613 | 2.46 | 3850 | 0.0859 | 0.98 |
| 0.0679 | 2.5 | 3900 | 0.0721 | 0.9867 |
| 0.0417 | 2.53 | 3950 | 0.0977 | 0.9867 |
| 0.0616 | 2.56 | 4000 | 0.0789 | 0.9867 |
| 0.0678 | 2.59 | 4050 | 0.0804 | 0.9867 |
| 0.0651 | 2.62 | 4100 | 0.0994 | 0.98 |
| 0.0714 | 2.66 | 4150 | 0.0744 | 0.98 |
| 0.034 | 2.69 | 4200 | 0.0679 | 0.9867 |
| 0.0356 | 2.72 | 4250 | 0.0432 | 0.9933 |
| 0.0813 | 2.75 | 4300 | 0.0483 | 0.9933 |
| 0.052 | 2.78 | 4350 | 0.0689 | 0.9867 |
| 0.0611 | 2.82 | 4400 | 0.0474 | 0.9867 |
| 0.0615 | 2.85 | 4450 | 0.0557 | 0.9867 |
| 0.0569 | 2.88 | 4500 | 0.1056 | 0.98 |
| 0.0352 | 2.91 | 4550 | 0.0443 | 0.9933 |
| 0.0312 | 2.94 | 4600 | 0.1026 | 0.98 |
| 0.0662 | 2.98 | 4650 | 0.0677 | 0.9867 |
| 0.0694 | 3.01 | 4700 | 0.0368 | 0.9933 |
| 0.0144 | 3.04 | 4750 | 0.0647 | 0.9867 |
| 0.0378 | 3.07 | 4800 | 0.0893 | 0.9867 |
| 0.0393 | 3.1 | 4850 | 0.0841 | 0.9867 |
| 0.0598 | 3.13 | 4900 | 0.0594 | 0.9867 |
| 0.0329 | 3.17 | 4950 | 0.0933 | 0.9867 |
| 0.036 | 3.2 | 5000 | 0.0974 | 0.9867 |
| 0.0166 | 3.23 | 5050 | 0.0962 | 0.9867 |
| 0.0189 | 3.26 | 5100 | 0.0827 | 0.9867 |
| 0.0482 | 3.29 | 5150 | 0.0955 | 0.9867 |
| 0.0105 | 3.33 | 5200 | 0.0745 | 0.9867 |
| 0.0447 | 3.36 | 5250 | 0.1038 | 0.9867 |
| 0.0495 | 3.39 | 5300 | 0.0684 | 0.9867 |
| 0.0445 | 3.42 | 5350 | 0.0815 | 0.9867 |
| 0.0006 | 3.45 | 5400 | 0.1012 | 0.9867 |
| 0.0214 | 3.49 | 5450 | 0.0707 | 0.9867 |
| 0.0289 | 3.52 | 5500 | 0.1000 | 0.9867 |
| 0.0304 | 3.55 | 5550 | 0.1069 | 0.9867 |
| 0.0339 | 3.58 | 5600 | 0.1079 | 0.9867 |
| 0.0227 | 3.61 | 5650 | 0.1032 | 0.9867 |
| 0.0626 | 3.65 | 5700 | 0.0978 | 0.9867 |
| 0.04 | 3.68 | 5750 | 0.0965 | 0.9867 |
| 0.0358 | 3.71 | 5800 | 0.1048 | 0.9867 |
| 0.0287 | 3.74 | 5850 | 0.0921 | 0.9867 |
| 0.049 | 3.77 | 5900 | 0.1108 | 0.98 |
| 0.0497 | 3.81 | 5950 | 0.0795 | 0.9867 |
| 0.0047 | 3.84 | 6000 | 0.0979 | 0.9867 |
| 0.0252 | 3.87 | 6050 | 0.1071 | 0.9867 |
| 0.0691 | 3.9 | 6100 | 0.0821 | 0.9867 |
| 0.0419 | 3.93 | 6150 | 0.0896 | 0.9867 |
| 0.0197 | 3.97 | 6200 | 0.0943 | 0.9867 |
| 0.0281 | 4.0 | 6250 | 0.0901 | 0.9867 |
| 0.0118 | 4.03 | 6300 | 0.0950 | 0.9867 |
| 0.0057 | 4.06 | 6350 | 0.1031 | 0.9867 |
| 0.0335 | 4.09 | 6400 | 0.0896 | 0.9867 |
| 0.0095 | 4.13 | 6450 | 0.0966 | 0.9867 |
| 0.05 | 4.16 | 6500 | 0.0977 | 0.9867 |
| 0.0142 | 4.19 | 6550 | 0.1174 | 0.98 |
| 0.018 | 4.22 | 6600 | 0.0963 | 0.9867 |
| 0.0274 | 4.25 | 6650 | 0.0953 | 0.9867 |
| 0.0199 | 4.29 | 6700 | 0.0968 | 0.9867 |
| 0.0171 | 4.32 | 6750 | 0.0963 | 0.9867 |
| 0.0195 | 4.35 | 6800 | 0.0916 | 0.9867 |
| 0.0091 | 4.38 | 6850 | 0.0954 | 0.9867 |
| 0.0115 | 4.41 | 6900 | 0.0974 | 0.9867 |
| 0.0299 | 4.45 | 6950 | 0.0971 | 0.9867 |
| 0.0338 | 4.48 | 7000 | 0.0922 | 0.9867 |
| 0.0107 | 4.51 | 7050 | 0.0964 | 0.9867 |
| 0.0063 | 4.54 | 7100 | 0.0921 | 0.9867 |
| 0.0099 | 4.57 | 7150 | 0.0923 | 0.9867 |
| 0.0101 | 4.61 | 7200 | 0.0971 | 0.9867 |
| 0.0262 | 4.64 | 7250 | 0.1008 | 0.9867 |
| 0.0097 | 4.67 | 7300 | 0.0999 | 0.9867 |
| 0.0302 | 4.7 | 7350 | 0.0980 | 0.9867 |
| 0.0225 | 4.73 | 7400 | 0.0976 | 0.9867 |
| 0.0235 | 4.77 | 7450 | 0.1016 | 0.9867 |
| 0.0106 | 4.8 | 7500 | 0.1034 | 0.9867 |
| 0.0495 | 4.83 | 7550 | 0.1135 | 0.98 |
| 0.0228 | 4.86 | 7600 | 0.1034 | 0.9867 |
| 0.0229 | 4.89 | 7650 | 0.0990 | 0.9867 |
| 0.0206 | 4.93 | 7700 | 0.0993 | 0.9867 |
| 0.0188 | 4.96 | 7750 | 0.0993 | 0.9867 |
| 0.0189 | 4.99 | 7800 | 0.0995 | 0.9867 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.2
|
SwePalm/sd-class-butterflies-64 | SwePalm | 2022-11-28T20:42:14Z | 34 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-28T20:41:56Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("SwePalm/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
pig4431/TUF_XLNET_5E | pig4431 | 2022-11-28T20:40:06Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T20:20:56Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TUF_XLNET_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TUF_XLNET_5E
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2725
- Accuracy: 0.9533
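No inference example is provided; a minimal sketch is below. Because the card does not describe the training data or the label meanings, check `model.config.id2label` before interpreting predictions.
```python
from transformers import pipeline

# Text classification with the fine-tuned XLNet checkpoint; labels are undocumented.
clf = pipeline("text-classification", model="pig4431/TUF_XLNET_5E")
print(clf("Example sentence to classify."))
```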
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4817 | 0.1 | 50 | 0.2602 | 0.8733 |
| 0.2405 | 0.2 | 100 | 0.5818 | 0.88 |
| 0.2172 | 0.3 | 150 | 0.1851 | 0.9533 |
| 0.2697 | 0.4 | 200 | 0.1692 | 0.9267 |
| 0.2313 | 0.5 | 250 | 0.1086 | 0.9467 |
| 0.2245 | 0.59 | 300 | 0.2031 | 0.9267 |
| 0.1805 | 0.69 | 350 | 0.1414 | 0.9467 |
| 0.1896 | 0.79 | 400 | 0.0824 | 0.9733 |
| 0.1969 | 0.89 | 450 | 0.1499 | 0.9533 |
| 0.1745 | 0.99 | 500 | 0.1827 | 0.9267 |
| 0.1143 | 1.09 | 550 | 0.1923 | 0.9533 |
| 0.1478 | 1.19 | 600 | 0.1718 | 0.94 |
| 0.1368 | 1.29 | 650 | 0.1170 | 0.9733 |
| 0.1288 | 1.39 | 700 | 0.1418 | 0.9667 |
| 0.1689 | 1.49 | 750 | 0.1173 | 0.9733 |
| 0.1078 | 1.58 | 800 | 0.2784 | 0.9333 |
| 0.1343 | 1.68 | 850 | 0.1555 | 0.9533 |
| 0.1104 | 1.78 | 900 | 0.1361 | 0.9533 |
| 0.1267 | 1.88 | 950 | 0.1936 | 0.9267 |
| 0.0928 | 1.98 | 1000 | 0.3070 | 0.94 |
| 0.0949 | 2.08 | 1050 | 0.1905 | 0.94 |
| 0.0329 | 2.18 | 1100 | 0.2296 | 0.9533 |
| 0.0406 | 2.28 | 1150 | 0.3202 | 0.94 |
| 0.0983 | 2.38 | 1200 | 0.4515 | 0.9267 |
| 0.0533 | 2.48 | 1250 | 0.2152 | 0.9533 |
| 0.0878 | 2.57 | 1300 | 0.1573 | 0.9533 |
| 0.0595 | 2.67 | 1350 | 0.1699 | 0.96 |
| 0.0937 | 2.77 | 1400 | 0.2825 | 0.9333 |
| 0.0817 | 2.87 | 1450 | 0.2325 | 0.9467 |
| 0.0845 | 2.97 | 1500 | 0.1918 | 0.9533 |
| 0.0711 | 3.07 | 1550 | 0.3186 | 0.94 |
| 0.033 | 3.17 | 1600 | 0.2571 | 0.94 |
| 0.0134 | 3.27 | 1650 | 0.2733 | 0.94 |
| 0.0546 | 3.37 | 1700 | 0.1934 | 0.9533 |
| 0.0277 | 3.47 | 1750 | 0.2731 | 0.94 |
| 0.0081 | 3.56 | 1800 | 0.2531 | 0.9467 |
| 0.0387 | 3.66 | 1850 | 0.2121 | 0.96 |
| 0.0014 | 3.76 | 1900 | 0.2601 | 0.96 |
| 0.0379 | 3.86 | 1950 | 0.2501 | 0.9467 |
| 0.0271 | 3.96 | 2000 | 0.2899 | 0.94 |
| 0.0182 | 4.06 | 2050 | 0.2197 | 0.9533 |
| 0.0263 | 4.16 | 2100 | 0.2374 | 0.9533 |
| 0.0079 | 4.26 | 2150 | 0.3192 | 0.94 |
| 0.0239 | 4.36 | 2200 | 0.3755 | 0.9333 |
| 0.02 | 4.46 | 2250 | 0.2702 | 0.9467 |
| 0.0072 | 4.55 | 2300 | 0.2055 | 0.9533 |
| 0.0124 | 4.65 | 2350 | 0.2299 | 0.9533 |
| 0.0072 | 4.75 | 2400 | 0.2813 | 0.9533 |
| 0.0125 | 4.85 | 2450 | 0.2696 | 0.9533 |
| 0.0205 | 4.95 | 2500 | 0.2725 | 0.9533 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
rmartinshort/sd-class-butterflies-64 | rmartinshort | 2022-11-28T20:32:13Z | 36 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-28T20:31:54Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("rmartinshort/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
CyantifiCQ/noisy_butterflied_diffusion | CyantifiCQ | 2022-11-28T20:23:45Z | 35 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-28T20:22:34Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("CyantifiCQ/noisy_butterflied_diffusion")
image = pipeline().images[0]
image
```
|
futuredatascience/from-classifier-v1 | futuredatascience | 2022-11-28T20:07:27Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-28T20:07:15Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 53 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 530,
"warmup_steps": 53,
"weight_decay": 0.01
}
```
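For readers who want to reproduce a comparable run, the sketch below shows how these parameters map onto the sentence-transformers `fit()` API. The base checkpoint and the training pair are stand-ins, since the card names neither.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Stand-in base model; the card only reveals an MPNet encoder with mean pooling.
model = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")

# Toy pair; the actual training data is not published with this card.
train_examples = [InputExample(texts=["first sentence", "second sentence"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler="WarmupLinear",
    warmup_steps=53,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```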
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
SwePalm/sd-class-butterflies-32 | SwePalm | 2022-11-28T20:01:43Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-28T20:00:51Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of (not so?) cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("SwePalm/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
reubenjohn/stack-overflow-open-status-classifier-pt | reubenjohn | 2022-11-28T20:01:21Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-16T03:44:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: stack-overflow-open-status-classifier-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stack-overflow-open-status-classifier-pt
This model is a fine-tuned version of [reubenjohn/stack-overflow-open-status-classifier-pt](https://huggingface.co/reubenjohn/stack-overflow-open-status-classifier-pt) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9448
- eval_runtime: 3.554
- eval_samples_per_second: 28.137
- eval_steps_per_second: 0.563
- epoch: 0.01
- step: 60
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 1
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
UKP-SQuARE/tweac_16 | UKP-SQuARE | 2022-11-28T19:43:48Z | 102 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"QA",
"en",
"dataset:BoolQ",
"dataset:CommonSenseQA",
"dataset:DROP",
"dataset:DuoRC",
"dataset:HellaSWAG",
"dataset:HotpotQA",
"dataset:HybridQA",
"dataset:NarrativeQA",
"dataset:NaturalQuestionsShort",
"dataset:NewsQA",
"dataset:QAMR",
"dataset:RACE",
"dataset:SearchQA",
"dataset:SIQA",
"dataset:SQuAD",
"dataset:TriviaQA-web",
"arxiv:2104.07081",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-09T18:34:07Z | ---
language:
- en
tags:
- QA
license: cc-by-4.0
datasets:
- BoolQ
- CommonSenseQA
- DROP
- DuoRC
- HellaSWAG
- HotpotQA
- HybridQA
- NarrativeQA
- NaturalQuestionsShort
- NewsQA
- QAMR
- RACE
- SearchQA
- SIQA
- SQuAD
- TriviaQA-web
metrics:
- Accuracy
- Precision
- Recall
- F1
- MRR
- R@3
- R@5
---
BERT for Sequence Classification trained on the QA dataset prediction task.
- Input: a question.
- Output: the dataset the question comes from.
Original paper: TWEAC: Transformer with Extendable QA Agent Classifiers
https://arxiv.org/abs/2104.07081
Datasets used for training:
```
list_datasets = ['BoolQ','CommonSenseQA','DROP','DuoRC','HellaSWAG','HotpotQA','HybridQA','NarrativeQA','NaturalQuestionsShort','NewsQA','QAMR','RACE','SearchQA','SIQA','SQuAD','TriviaQA-web']
```
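A minimal routing sketch is shown below; it assumes the checkpoint's `id2label` mapping names the source datasets (verify against `model.config.id2label` rather than the hard-coded list above).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "UKP-SQuARE/tweac_16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "Who wrote the novel that the movie adaptation was based on?"
inputs = tokenizer(question, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
# The predicted class should correspond to one of the QA datasets listed above.
print(predicted_id, model.config.id2label[predicted_id])
```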
Results for all datasets:
- Accuracy: 0.7919096825783123
- Precision: 0.731586272892176
- Recall: 0.7919096825783123
- F1: 0.7494425609552463
- MRR: 0.8720871733637521
- R@3: 0.9438690810655046
- R@5: 0.9745318608004427
- Queries/second: 6052.33538824659
Results per dataset:
```
"BoolQ": {
"accuracy": 0.998776758409786,
"mrr": 0.999388379204893,
"r@3": 1.0,
"r@5": 1.0,
"query_per_second": 6978.947907596168,
"precision": 0.8649364406779662,
"recall": 0.998776758409786,
"f1": 0.9270508089696281
},
"CommonSenseQA": {
"accuracy": 0.9247135842880524,
"mrr": 0.9476358338878795,
"r@3": 0.9705400981996727,
"r@5": 0.9705400981996727,
"query_per_second": 5823.984138936813,
"precision": 0.442443226311668,
"recall": 0.9247135842880524,
"f1": 0.5985169491525425
},
"DROP": {
"accuracy": 0.9075083892617449,
"mrr": 0.9378200367399193,
"r@3": 0.9609899328859061,
"r@5": 0.9786073825503355,
"query_per_second": 6440.988897129248,
"precision": 0.8636726546906187,
"recall": 0.9075083892617449,
"f1": 0.8850480670893842
},
"DuoRC": {
"accuracy": 0.5555803405457654,
"mrr": 0.7368963429107307,
"r@3": 0.9092125808610305,
"r@5": 0.9596996059186557,
"query_per_second": 6853.643198794893,
"precision": 0.646814404432133,
"recall": 0.5555803405457654,
"f1": 0.5977360905563778
},
"HellaSWAG": {
"accuracy": 0.998406691894045,
"mrr": 0.9990705702715262,
"r@3": 1.0,
"r@5": 1.0,
"query_per_second": 3091.5012960785157,
"precision": 0.9974134500596896,
"recall": 0.998406691894045,
"f1": 0.9979098238280083
},
"HotpotQA": {
"accuracy": 0.7414435784479837,
"mrr": 0.8435804344945315,
"r@3": 0.9325652321247034,
"r@5": 0.973568281938326,
"query_per_second": 4972.668019223381,
"precision": 0.7352150537634409,
"recall": 0.7414435784479837,
"f1": 0.7383161801923401
},
"HybridQA": {
"accuracy": 0.7934218118869013,
"mrr": 0.8806947764680021,
"r@3": 0.964800923254472,
"r@5": 0.9930755914598961,
"query_per_second": 4886.494046259562,
"precision": 0.7198952879581152,
"recall": 0.7934218118869013,
"f1": 0.7548723579467472
},
"NarrativeQA": {
"accuracy": 0.5623756749076442,
"mrr": 0.7416681781060867,
"r@3": 0.9011082693947144,
"r@5": 0.9580373212086767,
"query_per_second": 7081.067049796865,
"precision": 0.5623224095472628,
"recall": 0.5623756749076442,
"f1": 0.5623490409661377
},
"NaturalQuestionsShort": {
"accuracy": 0.7985353692739171,
"mrr": 0.8743599435345307,
"r@3": 0.9439077594266126,
"r@5": 0.9774072919912745,
"query_per_second": 7136.590426649795,
"precision": 0.7963020509633313,
"recall": 0.7985353692739171,
"f1": 0.7974171464135678
},
"NewsQA": {
"accuracy": 0.5375118708452041,
"mrr": 0.71192075967717,
"r@3": 0.855650522317189,
"r@5": 0.939696106362773,
"query_per_second": 7193.851409052092,
"precision": 0.18757249378624688,
"recall": 0.5375118708452041,
"f1": 0.2780985136961061
},
"QAMR": {
"accuracy": 0.6658497602557272,
"mrr": 0.7969741223377345,
"r@3": 0.9207778369738945,
"r@5": 0.973361747469366,
"query_per_second": 7321.775044800525,
"precision": 0.8654525309881587,
"recall": 0.6658497602557272,
"f1": 0.7526421968624852
},
"RACE": {
"accuracy": 0.8771538617474154,
"mrr": 0.917901778042666,
"r@3": 0.9489154672613015,
"r@5": 0.9693898236367322,
"query_per_second": 6952.225120744351,
"precision": 0.8767983789260385,
"recall": 0.8771538617474154,
"f1": 0.8769760843129306
},
"SearchQA": {
"accuracy": 0.9762073027090695,
"mrr": 0.9865069592101393,
"r@3": 0.9972909305064782,
"r@5": 0.9984687868080094,
"query_per_second": 4031.0193826035634,
"precision": 0.9870191735143503,
"recall": 0.9762073027090695,
"f1": 0.9815834665719192
},
"SIQA": {
"accuracy": 0.9969293756397134,
"mrr": 0.9977823268509042,
"r@3": 0.9979529170931423,
"r@5": 1.0,
"query_per_second": 6711.547709005977,
"precision": 0.9329501915708812,
"recall": 0.9969293756397134,
"f1": 0.9638792676892627
},
"SQuAD": {
"accuracy": 0.550628092881614,
"mrr": 0.7164538452390565,
"r@3": 0.8660068519223448,
"r@5": 0.9366197183098591,
"query_per_second": 7033.420124363291,
"precision": 0.48613678373382624,
"recall": 0.550628092881614,
"f1": 0.5163766175814368
},
"TriviaQA-web": {
"accuracy": 0.7855124582584125,
"mrr": 0.8647404868442627,
"r@3": 0.9321859748266119,
"r@5": 0.9640380169535063,
"query_per_second": 4327.642440910395,
"precision": 0.7404358353510896,
"recall": 0.7855124582584125,
"f1": 0.7623083634550667
},
``` |
essayproj/syntax | essayproj | 2022-11-28T19:15:52Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T18:59:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: syntax
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# syntax
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1395
- Accuracy: 0.6111
- F1: 0.4596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
huggingtweets/ttunguz | huggingtweets | 2022-11-28T19:09:57Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-24T01:44:13Z | ---
language: en
thumbnail: http://www.huggingtweets.com/ttunguz/1669662593098/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/901542400559992832/yDp0b2Al_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tomasz Tunguz</div>
<div style="text-align: center; font-size: 14px;">@ttunguz</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tomasz Tunguz.
| Data | Tomasz Tunguz |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 590 |
| Short tweets | 50 |
| Tweets kept | 2599 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/vxbo3iui/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ttunguz's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/190cyogq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/190cyogq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ttunguz')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
essayproj/roberta-base-essay | essayproj | 2022-11-28T19:08:54Z | 59 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"feature-extraction",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-11-28T19:08:03Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: roberta-base-essay
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# roberta-base-essay
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Tokenizers 0.13.2
|
leonrafael29/bert2bert_uncased_english_to_spanish | leonrafael29 | 2022-11-28T18:52:56Z | 13 | 0 | transformers | [
"transformers",
"encoder-decoder",
"text2text-generation",
"translation",
"en",
"es",
"dataset:news_commentary",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-11-28T17:32:46Z | ---
language:
- en
- es
tags:
- translation
datasets:
- news_commentary
metrics:
- bleurt
--- |
FrancoisDongier/sd-class-butterflies-32 | FrancoisDongier | 2022-11-28T18:19:31Z | 34 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-28T18:16:21Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("FrancoisDongier/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
ashu1318/lilt-en-funsd | ashu1318 | 2022-11-28T18:17:59Z | 80 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-28T17:49:59Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8731
- Answer: {'precision': 0.8688915375446961, 'recall': 0.8922888616891065, 'f1': 0.8804347826086957, 'number': 817}
- Header: {'precision': 0.638095238095238, 'recall': 0.5630252100840336, 'f1': 0.5982142857142857, 'number': 119}
- Question: {'precision': 0.9105166051660517, 'recall': 0.9164345403899722, 'f1': 0.9134659879685332, 'number': 1077}
- Overall Precision: 0.8792
- Overall Recall: 0.8857
- Overall F1: 0.8825
- Overall Accuracy: 0.7976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4323 | 10.53 | 200 | 1.0423 | {'precision': 0.8369195922989807, 'recall': 0.9045287637698899, 'f1': 0.8694117647058823, 'number': 817} | {'precision': 0.5405405405405406, 'recall': 0.5042016806722689, 'f1': 0.5217391304347826, 'number': 119} | {'precision': 0.8869323447636701, 'recall': 0.8885793871866295, 'f1': 0.8877551020408162, 'number': 1077} | 0.8471 | 0.8723 | 0.8595 | 0.7981 |
| 0.045 | 21.05 | 400 | 1.2757 | {'precision': 0.8435374149659864, 'recall': 0.9106487148102815, 'f1': 0.8758092995879929, 'number': 817} | {'precision': 0.5795454545454546, 'recall': 0.42857142857142855, 'f1': 0.49275362318840576, 'number': 119} | {'precision': 0.8626943005181347, 'recall': 0.9275766016713092, 'f1': 0.8939597315436242, 'number': 1077} | 0.8430 | 0.8912 | 0.8665 | 0.8026 |
| 0.0133 | 31.58 | 600 | 1.4887 | {'precision': 0.8632075471698113, 'recall': 0.8959608323133414, 'f1': 0.8792792792792793, 'number': 817} | {'precision': 0.6020408163265306, 'recall': 0.4957983193277311, 'f1': 0.543778801843318, 'number': 119} | {'precision': 0.8791887125220459, 'recall': 0.9257195914577531, 'f1': 0.9018543645409318, 'number': 1077} | 0.8596 | 0.8882 | 0.8737 | 0.7983 |
| 0.0051 | 42.11 | 800 | 1.7382 | {'precision': 0.8601645123384254, 'recall': 0.8959608323133414, 'f1': 0.8776978417266187, 'number': 817} | {'precision': 0.5636363636363636, 'recall': 0.5210084033613446, 'f1': 0.5414847161572053, 'number': 119} | {'precision': 0.9032558139534884, 'recall': 0.9015784586815228, 'f1': 0.9024163568773235, 'number': 1077} | 0.8669 | 0.8768 | 0.8718 | 0.7925 |
| 0.004 | 52.63 | 1000 | 1.7599 | {'precision': 0.8307349665924276, 'recall': 0.9130966952264382, 'f1': 0.8699708454810495, 'number': 817} | {'precision': 0.6039603960396039, 'recall': 0.5126050420168067, 'f1': 0.5545454545454545, 'number': 119} | {'precision': 0.8939256572982774, 'recall': 0.9155060352831941, 'f1': 0.9045871559633027, 'number': 1077} | 0.8530 | 0.8907 | 0.8714 | 0.7941 |
| 0.002 | 63.16 | 1200 | 1.8409 | {'precision': 0.8312985571587126, 'recall': 0.9167686658506732, 'f1': 0.8719441210710128, 'number': 817} | {'precision': 0.6074766355140186, 'recall': 0.5462184873949579, 'f1': 0.575221238938053, 'number': 119} | {'precision': 0.8814949863263446, 'recall': 0.8978644382544104, 'f1': 0.8896044158233671, 'number': 1077} | 0.8461 | 0.8847 | 0.8650 | 0.7876 |
| 0.0013 | 73.68 | 1400 | 1.7795 | {'precision': 0.81445523193096, 'recall': 0.9241126070991432, 'f1': 0.8658256880733943, 'number': 817} | {'precision': 0.6237623762376238, 'recall': 0.5294117647058824, 'f1': 0.5727272727272728, 'number': 119} | {'precision': 0.888785046728972, 'recall': 0.883008356545961, 'f1': 0.8858872845831393, 'number': 1077} | 0.8432 | 0.8788 | 0.8606 | 0.7934 |
| 0.0011 | 84.21 | 1600 | 1.8386 | {'precision': 0.8338833883388339, 'recall': 0.9277845777233782, 'f1': 0.8783314020857474, 'number': 817} | {'precision': 0.6597938144329897, 'recall': 0.5378151260504201, 'f1': 0.5925925925925926, 'number': 119} | {'precision': 0.8943985307621671, 'recall': 0.904363974001857, 'f1': 0.8993536472760849, 'number': 1077} | 0.8573 | 0.8922 | 0.8744 | 0.7945 |
| 0.0048 | 94.74 | 1800 | 1.8664 | {'precision': 0.8589595375722543, 'recall': 0.9094247246022031, 'f1': 0.8834720570749108, 'number': 817} | {'precision': 0.6504854368932039, 'recall': 0.5630252100840336, 'f1': 0.6036036036036037, 'number': 119} | {'precision': 0.9003656307129799, 'recall': 0.914577530176416, 'f1': 0.9074159373560571, 'number': 1077} | 0.8705 | 0.8917 | 0.8810 | 0.7927 |
| 0.0004 | 105.26 | 2000 | 1.8672 | {'precision': 0.8634772462077013, 'recall': 0.9057527539779682, 'f1': 0.8841099163679809, 'number': 817} | {'precision': 0.7093023255813954, 'recall': 0.5126050420168067, 'f1': 0.5951219512195123, 'number': 119} | {'precision': 0.8923076923076924, 'recall': 0.9155060352831941, 'f1': 0.9037580201649862, 'number': 1077} | 0.8726 | 0.8877 | 0.8801 | 0.7953 |
| 0.0005 | 115.79 | 2200 | 1.8731 | {'precision': 0.8688915375446961, 'recall': 0.8922888616891065, 'f1': 0.8804347826086957, 'number': 817} | {'precision': 0.638095238095238, 'recall': 0.5630252100840336, 'f1': 0.5982142857142857, 'number': 119} | {'precision': 0.9105166051660517, 'recall': 0.9164345403899722, 'f1': 0.9134659879685332, 'number': 1077} | 0.8792 | 0.8857 | 0.8825 | 0.7976 |
| 0.0002 | 126.32 | 2400 | 1.9408 | {'precision': 0.8408071748878924, 'recall': 0.9179926560587516, 'f1': 0.8777062609713283, 'number': 817} | {'precision': 0.6310679611650486, 'recall': 0.5462184873949579, 'f1': 0.5855855855855856, 'number': 119} | {'precision': 0.9091760299625468, 'recall': 0.9015784586815228, 'f1': 0.9053613053613054, 'number': 1077} | 0.8657 | 0.8872 | 0.8763 | 0.7935 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
kejian/final-filter-again | kejian | 2022-11-28T17:39:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-11-28T01:33:32Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: kejian/final-filter-again
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/final-filter-again
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
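No generation example is included; a hedged sketch is below. Whether the repo ships its own tokenizer files is not stated, so the codeparrot-small tokenizer named in the full config is passed explicitly as an assumption.
```python
from transformers import pipeline

# Code generation with the from-scratch GPT-2-style model; tokenizer taken from the config below.
generator = pipeline(
    "text-generation",
    model="kejian/final-filter-again",
    tokenizer="codeparrot/codeparrot-small",
)
print(generator("def fibonacci(n):", max_new_tokens=64, do_sample=True, top_p=0.9)[0]["generated_text"])
```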
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'filter_threshold': 0.002361,
'is_split_by_sentences': True},
'generation': {'batch_size': 64,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 512},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 512,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/final-filter-again',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 5000,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/25z4zfy3 |
akmmsr/mt5-small-finetuned-amazon-en-es_akmmsr | akmmsr | 2022-11-28T17:15:10Z | 61 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-28T16:23:28Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: akmmsr/mt5-small-finetuned-amazon-en-es_akmmsr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# akmmsr/mt5-small-finetuned-amazon-en-es_akmmsr
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0336
- Validation Loss: 3.3393
- Epoch: 7
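The card does not state the task; given the mT5 checkpoint and the amazon-en-es naming, it is presumably review summarization, so treat the sketch below as an assumption. The repo carries TensorFlow weights, hence `TFAutoModelForSeq2SeqLM`.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "akmmsr/mt5-small-finetuned-amazon-en-es_akmmsr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

review = "I loved this book: the plot is gripping and the characters feel real."
inputs = tokenizer(review, return_tensors="tf")
summary_ids = model.generate(inputs["input_ids"], max_length=48)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```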
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.6397 | 4.2364 | 0 |
| 5.8621 | 3.7162 | 1 |
| 5.0948 | 3.5552 | 2 |
| 4.6724 | 3.4873 | 3 |
| 4.4007 | 3.4245 | 4 |
| 4.2162 | 3.3792 | 5 |
| 4.0985 | 3.3499 | 6 |
| 4.0336 | 3.3393 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
wa3dbk/whisper-small-ar | wa3dbk | 2022-11-28T17:11:32Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-25T18:33:06Z |
## whisper-small-ar
This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset (language=Arabic).
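A minimal transcription sketch (the audio path is a placeholder for any Arabic recording):
```python
from transformers import pipeline

# Arabic speech recognition with the fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="wa3dbk/whisper-small-ar")
print(asr("audio.mp3")["text"])  # "audio.mp3" is a placeholder path
```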
|
antgrutta/sd-class-butterflies-32 | antgrutta | 2022-11-28T16:59:10Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-28T16:58:32Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("antgrutta/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
EmnaBou/bert-finetuned-DT | EmnaBou | 2022-11-28T16:49:12Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-28T15:20:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-DT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-DT
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6697
- Precision: 0.2381
- Recall: 0.0321
- F1: 0.0565
- Accuracy: 0.8179
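No inference example is given; a minimal token-classification sketch is below. The tag set is not documented, so inspect the returned `entity_group` values before using them.
```python
from transformers import pipeline

# Token classification with the fine-tuned checkpoint; label names come from the saved config.
tagger = pipeline("token-classification", model="EmnaBou/bert-finetuned-DT", aggregation_strategy="simple")
print(tagger("Example sentence to tag."))
```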
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 99 | 0.7505 | 0.0 | 0.0 | 0.0 | 0.8196 |
| No log | 2.0 | 198 | 0.7033 | 0.0 | 0.0 | 0.0 | 0.8196 |
| No log | 3.0 | 297 | 0.6697 | 0.2381 | 0.0321 | 0.0565 | 0.8179 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
luisgasco/distilbert-base-uncased-finetuned-emotion | luisgasco | 2022-11-28T16:17:49Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T16:03:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.892
- name: F1
type: f1
value: 0.8873822002431591
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3693
- Accuracy: 0.892
- F1: 0.8874
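A minimal usage sketch: the emotion dataset has six classes (sadness, joy, love, anger, fear, surprise), and the exact label strings returned come from the checkpoint's saved `id2label` mapping.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="luisgasco/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results of this experiment!"))
```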
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5715 | 0.8275 | 0.8047 |
| 0.7552 | 2.0 | 250 | 0.3693 | 0.892 | 0.8874 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tomekkorbak/awesome_ride | tomekkorbak | 2022-11-28T16:12:40Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-28T16:12:19Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: awesome_ride
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# awesome_ride
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
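
As a rough usage sketch (assuming the checkpoint was pushed to the Hub as `tomekkorbak/awesome_ride`, per the `hub_model_id` entry in the config below), the model can be loaded like any GPT-2-style causal language model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The training config below uses the standard 'gpt2' tokenizer;
# the model repository id is assumed from its hub_model_id field.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("tomekkorbak/awesome_ride")

inputs = tokenizer("The experiment showed that", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```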
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.00065,
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'awesome_ride',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/3m98rnwq |
alexziweiwang/pure-start-epoch2 | alexziweiwang | 2022-11-28T16:08:48Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-11-28T15:52:06Z | ---
tags:
- generated_from_trainer
model-index:
- name: pure-start-epoch2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pure-start-epoch2
This model is a fine-tuned version of [alexziweiwang/pure-start-epoch1](https://huggingface.co/alexziweiwang/pure-start-epoch1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.7447
- Acc: 0.24
- Wer: 1.0
- Correct: 48
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:---:|:-------:|:-----:|:------:|
| No log | 0.01 | 2 | 20.4002 | 0.095 | 1.0 | 19 | 200 | 200 |
| No log | 0.02 | 4 | 19.9080 | 0.095 | 1.0 | 19 | 200 | 200 |
| No log | 0.03 | 6 | 19.4711 | 0.095 | 1.0 | 19 | 200 | 200 |
| No log | 0.03 | 8 | 19.1535 | 0.095 | 1.0 | 19 | 200 | 200 |
| 46.6007 | 0.04 | 10 | 18.6684 | 0.095 | 1.0 | 19 | 200 | 200 |
| 46.6007 | 0.05 | 12 | 18.1640 | 0.095 | 1.0 | 19 | 200 | 200 |
| 46.6007 | 0.06 | 14 | 17.6937 | 0.095 | 1.0 | 19 | 200 | 200 |
| 46.6007 | 0.07 | 16 | 17.2710 | 0.095 | 1.0 | 19 | 200 | 200 |
| 46.6007 | 0.08 | 18 | 16.8469 | 0.095 | 1.0 | 19 | 200 | 200 |
| 49.1547 | 0.08 | 20 | 16.4418 | 0.095 | 1.0 | 19 | 200 | 200 |
| 49.1547 | 0.09 | 22 | 16.0409 | 0.095 | 1.0 | 19 | 200 | 200 |
| 49.1547 | 0.1 | 24 | 15.6677 | 0.095 | 1.0 | 19 | 200 | 200 |
| 49.1547 | 0.11 | 26 | 15.3291 | 0.095 | 1.0 | 19 | 200 | 200 |
| 49.1547 | 0.12 | 28 | 15.0097 | 0.095 | 1.0 | 19 | 200 | 200 |
| 35.1416 | 0.13 | 30 | 14.6776 | 0.095 | 1.0 | 19 | 200 | 200 |
| 35.1416 | 0.13 | 32 | 14.3788 | 0.095 | 1.0 | 19 | 200 | 200 |
| 35.1416 | 0.14 | 34 | 14.0924 | 0.095 | 1.0 | 19 | 200 | 200 |
| 35.1416 | 0.15 | 36 | 13.8133 | 0.095 | 1.0 | 19 | 200 | 200 |
| 35.1416 | 0.16 | 38 | 13.5539 | 0.095 | 1.0 | 19 | 200 | 200 |
| 34.4057 | 0.17 | 40 | 13.3095 | 0.095 | 1.0 | 19 | 200 | 200 |
| 34.4057 | 0.18 | 42 | 13.0804 | 0.095 | 1.0 | 19 | 200 | 200 |
| 34.4057 | 0.19 | 44 | 12.8580 | 0.105 | 1.0 | 21 | 200 | 200 |
| 34.4057 | 0.19 | 46 | 12.6532 | 0.115 | 1.0 | 23 | 200 | 200 |
| 34.4057 | 0.2 | 48 | 12.4532 | 0.13 | 1.0 | 26 | 200 | 200 |
| 33.2759 | 0.21 | 50 | 12.2452 | 0.14 | 1.0 | 28 | 200 | 200 |
| 33.2759 | 0.22 | 52 | 12.0666 | 0.13 | 1.0 | 26 | 200 | 200 |
| 33.2759 | 0.23 | 54 | 11.8976 | 0.165 | 1.0 | 33 | 200 | 200 |
| 33.2759 | 0.24 | 56 | 11.7373 | 0.175 | 1.0 | 35 | 200 | 200 |
| 33.2759 | 0.24 | 58 | 11.5933 | 0.17 | 1.0 | 34 | 200 | 200 |
| 29.8129 | 0.25 | 60 | 11.4281 | 0.15 | 1.0 | 30 | 200 | 200 |
| 29.8129 | 0.26 | 62 | 11.2665 | 0.14 | 1.0 | 28 | 200 | 200 |
| 29.8129 | 0.27 | 64 | 11.1158 | 0.145 | 1.0 | 29 | 200 | 200 |
| 29.8129 | 0.28 | 66 | 10.9840 | 0.135 | 1.0 | 27 | 200 | 200 |
| 29.8129 | 0.29 | 68 | 10.8502 | 0.15 | 1.0 | 30 | 200 | 200 |
| 38.792 | 0.3 | 70 | 10.7341 | 0.15 | 1.0 | 30 | 200 | 200 |
| 38.792 | 0.3 | 72 | 10.6082 | 0.165 | 1.0 | 33 | 200 | 200 |
| 38.792 | 0.31 | 74 | 10.4944 | 0.18 | 1.0 | 36 | 200 | 200 |
| 38.792 | 0.32 | 76 | 10.3818 | 0.21 | 1.0 | 42 | 200 | 200 |
| 38.792 | 0.33 | 78 | 10.2719 | 0.235 | 1.0 | 47 | 200 | 200 |
| 28.0092 | 0.34 | 80 | 10.1636 | 0.235 | 1.0 | 47 | 200 | 200 |
| 28.0092 | 0.35 | 82 | 10.0709 | 0.24 | 1.0 | 48 | 200 | 200 |
| 28.0092 | 0.35 | 84 | 9.9797 | 0.24 | 1.0 | 48 | 200 | 200 |
| 28.0092 | 0.36 | 86 | 9.8958 | 0.24 | 1.0 | 48 | 200 | 200 |
| 28.0092 | 0.37 | 88 | 9.7977 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.6175 | 0.38 | 90 | 9.7015 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.6175 | 0.39 | 92 | 9.6150 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.6175 | 0.4 | 94 | 9.5304 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.6175 | 0.4 | 96 | 9.4521 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.6175 | 0.41 | 98 | 9.3832 | 0.24 | 1.0 | 48 | 200 | 200 |
| 26.3434 | 0.42 | 100 | 9.3148 | 0.24 | 1.0 | 48 | 200 | 200 |
| 26.3434 | 0.43 | 102 | 9.2563 | 0.24 | 1.0 | 48 | 200 | 200 |
| 26.3434 | 0.44 | 104 | 9.1944 | 0.24 | 1.0 | 48 | 200 | 200 |
| 26.3434 | 0.45 | 106 | 9.1323 | 0.24 | 1.0 | 48 | 200 | 200 |
| 26.3434 | 0.46 | 108 | 9.0717 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.4387 | 0.46 | 110 | 9.0245 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.4387 | 0.47 | 112 | 8.9772 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.4387 | 0.48 | 114 | 8.9390 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.4387 | 0.49 | 116 | 8.9013 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.4387 | 0.5 | 118 | 8.8605 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.7305 | 0.51 | 120 | 8.8126 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.7305 | 0.51 | 122 | 8.7503 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.7305 | 0.52 | 124 | 8.6921 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.7305 | 0.53 | 126 | 8.6378 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.7305 | 0.54 | 128 | 8.5927 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.5989 | 0.55 | 130 | 8.5520 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.5989 | 0.56 | 132 | 8.5126 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.5989 | 0.56 | 134 | 8.4743 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.5989 | 0.57 | 136 | 8.4369 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.5989 | 0.58 | 138 | 8.3993 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.8372 | 0.59 | 140 | 8.3636 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.8372 | 0.6 | 142 | 8.3311 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.8372 | 0.61 | 144 | 8.2983 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.8372 | 0.62 | 146 | 8.2652 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.8372 | 0.62 | 148 | 8.2345 | 0.24 | 1.0 | 48 | 200 | 200 |
| 20.1716 | 0.63 | 150 | 8.2064 | 0.24 | 1.0 | 48 | 200 | 200 |
| 20.1716 | 0.64 | 152 | 8.1818 | 0.24 | 1.0 | 48 | 200 | 200 |
| 20.1716 | 0.65 | 154 | 8.1603 | 0.24 | 1.0 | 48 | 200 | 200 |
| 20.1716 | 0.66 | 156 | 8.1403 | 0.24 | 1.0 | 48 | 200 | 200 |
| 20.1716 | 0.67 | 158 | 8.1180 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.5655 | 0.67 | 160 | 8.0997 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.5655 | 0.68 | 162 | 8.0791 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.5655 | 0.69 | 164 | 8.0563 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.5655 | 0.7 | 166 | 8.0342 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.5655 | 0.71 | 168 | 8.0130 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.3768 | 0.72 | 170 | 7.9936 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.3768 | 0.72 | 172 | 7.9756 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.3768 | 0.73 | 174 | 7.9594 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.3768 | 0.74 | 176 | 7.9439 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.3768 | 0.75 | 178 | 7.9298 | 0.24 | 1.0 | 48 | 200 | 200 |
| 19.7473 | 0.76 | 180 | 7.9157 | 0.24 | 1.0 | 48 | 200 | 200 |
| 19.7473 | 0.77 | 182 | 7.9021 | 0.24 | 1.0 | 48 | 200 | 200 |
| 19.7473 | 0.78 | 184 | 7.8899 | 0.24 | 1.0 | 48 | 200 | 200 |
| 19.7473 | 0.78 | 186 | 7.8796 | 0.24 | 1.0 | 48 | 200 | 200 |
| 19.7473 | 0.79 | 188 | 7.8697 | 0.24 | 1.0 | 48 | 200 | 200 |
| 15.7279 | 0.8 | 190 | 7.8598 | 0.24 | 1.0 | 48 | 200 | 200 |
| 15.7279 | 0.81 | 192 | 7.8490 | 0.24 | 1.0 | 48 | 200 | 200 |
| 15.7279 | 0.82 | 194 | 7.8390 | 0.24 | 1.0 | 48 | 200 | 200 |
| 15.7279 | 0.83 | 196 | 7.8293 | 0.24 | 1.0 | 48 | 200 | 200 |
| 15.7279 | 0.83 | 198 | 7.8211 | 0.24 | 1.0 | 48 | 200 | 200 |
| 18.5034 | 0.84 | 200 | 7.8135 | 0.24 | 1.0 | 48 | 200 | 200 |
| 18.5034 | 0.85 | 202 | 7.8064 | 0.24 | 1.0 | 48 | 200 | 200 |
| 18.5034 | 0.86 | 204 | 7.7991 | 0.24 | 1.0 | 48 | 200 | 200 |
| 18.5034 | 0.87 | 206 | 7.7924 | 0.24 | 1.0 | 48 | 200 | 200 |
| 18.5034 | 0.88 | 208 | 7.7862 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.1983 | 0.89 | 210 | 7.7803 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.1983 | 0.89 | 212 | 7.7749 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.1983 | 0.9 | 214 | 7.7701 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.1983 | 0.91 | 216 | 7.7657 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.1983 | 0.92 | 218 | 7.7628 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.7276 | 0.93 | 220 | 7.7595 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.7276 | 0.94 | 222 | 7.7567 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.7276 | 0.94 | 224 | 7.7541 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.7276 | 0.95 | 226 | 7.7518 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.7276 | 0.96 | 228 | 7.7497 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.8692 | 0.97 | 230 | 7.7479 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.8692 | 0.98 | 232 | 7.7463 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.8692 | 0.99 | 234 | 7.7453 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.8692 | 0.99 | 236 | 7.7447 | 0.24 | 1.0 | 48 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
SYH99999/autotrain-translator-2261971987 | SYH99999 | 2022-11-28T15:30:31Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"translation",
"ja",
"en",
"dataset:SYH99999/autotrain-data-translator-3c03831c-5fcf2e86-839aa322-a7658498-cb30b55a-eefc0458",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | translation | 2022-11-28T11:53:31Z | ---
tags:
- autotrain
- translation
language:
- ja
- en
datasets:
- SYH99999/autotrain-data-translator-3c03831c-5fcf2e86-839aa322-a7658498-cb30b55a-eefc0458
co2_eq_emissions:
emissions: 234.5986254372695
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 2261971987
- CO2 Emissions (in grams): 234.5986
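
A minimal inference sketch (assuming the AutoTrain checkpoint in this repository is a standard sequence-to-sequence translation model, Japanese to English per the tags above):

```python
from transformers import pipeline

# Assumed repository id; AutoTrain translation models are seq2seq checkpoints
# that work with the generic translation pipeline.
translator = pipeline("translation", model="SYH99999/autotrain-translator-2261971987")

print(translator("こんにちは、世界。", max_length=64))
```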
## Validation Metrics
- Loss: 4.237
- SacreBLEU: 0.697
- Gen len: 256.387 |
fathyshalab/all-roberta-large-v1-banking-2-2-1 | fathyshalab | 2022-11-28T15:28:40Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T15:27:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-2-2-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-2-2-1
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6817
- Accuracy: 0.1022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.653 | 1.0 | 5 | 2.6817 | 0.1022 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ConvLab/ddpt-policy-sgd_0.01multiwoz21 | ConvLab | 2022-11-28T15:24:29Z | 0 | 0 | null | [
"dialogue policy",
"task-oriented dialog",
"en",
"dataset:ConvLab/sgd",
"license:apache-2.0",
"region:us"
] | null | 2022-11-28T15:21:11Z | ---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/sgd
---
# ddpt-policy-sgd_0.01multiwoz21
This is a DDPT model (https://aclanthology.org/2022.coling-1.21/) trained on [Schema-Guided Dialog](https://huggingface.co/datasets/ConvLab/sgd) and afterwards fine-tuned on 1 percent of [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- seed: 0
- optimizer: Adam
- num_epochs: 40
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
|
ConvLab/ddpt-policy-0.01multiwoz21 | ConvLab | 2022-11-28T15:20:35Z | 0 | 0 | null | [
"dialogue policy",
"task-oriented dialog",
"en",
"dataset:ConvLab/sgd",
"license:apache-2.0",
"region:us"
] | null | 2022-11-28T15:18:28Z | ---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/sgd
---
# ddpt-policy-0.01multiwoz21
This is a DDPT model (https://aclanthology.org/2022.coling-1.21/) trained on 1 percent of [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- seed: 0
- optimizer: Adam
- num_epochs: 40
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
|
fathyshalab/all-roberta-large-v1-banking-1-2-1 | fathyshalab | 2022-11-28T15:12:05Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T15:10:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-1-2-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-1-2-1
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6235
- Accuracy: 0.2578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6542 | 1.0 | 3 | 2.6235 | 0.2578 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ConvLab/mle-policy-multiwoz21 | ConvLab | 2022-11-28T15:11:19Z | 0 | 0 | null | [
"dialogue policy",
"task-oriented dialog",
"en",
"dataset:ConvLab/multiwoz21",
"license:apache-2.0",
"region:us"
] | null | 2022-11-28T15:07:50Z | ---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/multiwoz21
---
# mle-policy-multiwoz21
This is an MLE model trained on [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- seed: 0
- optimizer: Adam
- num_epochs: 24
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
|
ConvLab/ddpt-policy-sgd | ConvLab | 2022-11-28T15:01:15Z | 0 | 1 | null | [
"dialogue policy",
"task-oriented dialog",
"en",
"dataset:ConvLab/sgd",
"license:apache-2.0",
"region:us"
] | null | 2022-11-28T13:21:09Z | ---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/sgd
---
# ddpt-policy-sgd
This is a DDPT model (https://aclanthology.org/2022.coling-1.21/) trained on [Schema-Guided Dialog](https://huggingface.co/datasets/ConvLab/sgd).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- seed: 0
- optimizer: Adam
- num_epochs: 1
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
|
regel-corpus/hunflair-tfbs | regel-corpus | 2022-11-28T14:37:52Z | 3 | 0 | flair | [
"flair",
"pytorch",
"hunflair",
"token-classification",
"sequence-tagger-model",
"en",
"region:us"
] | token-classification | 2022-03-29T11:26:41Z | ---
tags:
- flair
- hunflair
- token-classification
- sequence-tagger-model
language: en
widget:
- text: "It contains a functional GCGGCGGCG Egr-1-binding site"
---
## HunFlair model for Transcription Factor Binding Site (TFBS)
[HunFlair](https://github.com/flairNLP/flair/blob/master/resources/docs/HUNFLAIR.md) (biomedical flair) for TFBS entity.
Predicts 1 tag:
| **tag** | **meaning** |
|---------------------------------|-----------|
| Tfbs | DNA region bound by transcription factor |
---
### Cite
Please cite the following paper when using this model.
```
@article{garda2022regel,
title={RegEl corpus: identifying DNA regulatory elements in the scientific literature},
author={Garda, Samuele and Lenihan-Geels, Freyda and Proft, Sebastian and Hochmuth, Stefanie and Sch{\"u}lke, Markus and Seelow, Dominik and Leser, Ulf},
journal={Database},
volume={2022},
year={2022},
publisher={Oxford Academic}
}
```
---
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# for biomedical-specific tokenization:
# from flair.tokenization import SciSpacyTokenizer
# load tagger
tagger = SequenceTagger.load("regel-corpus/hunflair-tfbs")
text = "We found that Egr-1 specifically binds to the PTEN 5' untranslated region, which contains a functional GCGGCGGCG Egr-1-binding site."
# make example sentence
sentence = Sentence(text)
# for biomedical-specific tokenization:
# sentence = Sentence(text, use_tokenizer=SciSpacyTokenizer())
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [19,20,21]: "GCGGCGGCG Egr-1-binding site" [− Labels: Tfbs (0.9631)]
```
So, the entity "*GCGGCGGCG Egr-1-binding site*" is found in the sentence.
Alternatively download all models locally and use the `MultiTagger` class.
```python
from flair.models import MultiTagger
# paths to the locally downloaded models
model_paths = [
    './models/hunflair-promoter/pytorch_model.bin',
    './models/hunflair-enhancer/pytorch_model.bin',
    './models/hunflair-tfbs/pytorch_model.bin',
]
tagger = MultiTagger.load(model_paths)
tagger.predict(sentence)
```
---
|
regel-corpus/hunflair-enhancer | regel-corpus | 2022-11-28T14:37:03Z | 4 | 0 | flair | [
"flair",
"pytorch",
"hunflair",
"token-classification",
"sequence-tagger-model",
"en",
"region:us"
] | token-classification | 2022-03-29T09:09:18Z | ---
tags:
- flair
- hunflair
- token-classification
- sequence-tagger-model
language: en
widget:
- text: "Isolate an enhancer element located between -89 and -50 bp in PAI-1"
---
## HunFlair model for ENHANCER
[HunFlair](https://github.com/flairNLP/flair/blob/master/resources/docs/HUNFLAIR.md) (biomedical flair) for enhancer entity.
Predicts 1 tag:
| **tag** | **meaning** |
|---------------------------------|-----------|
| Enhancer | DNA enhancer region |
---
### Cite
Please cite the following paper when using this model.
```
@article{garda2022regel,
title={RegEl corpus: identifying DNA regulatory elements in the scientific literature},
author={Garda, Samuele and Lenihan-Geels, Freyda and Proft, Sebastian and Hochmuth, Stefanie and Sch{\"u}lke, Markus and Seelow, Dominik and Leser, Ulf},
journal={Database},
volume={2022},
year={2022},
publisher={Oxford Academic}
}
```
---
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# for biomedical-specific tokenization:
# from flair.tokenization import SciSpacyTokenizer
# load tagger
tagger = SequenceTagger.load("regel-corpus/hunflair-enhancer")
text = "An upstream activator of the mitogen-activated protein (MAP) kinase pathways was used to isolate an enhancer element located between -89 and -50 bp in PAI-1 promoter that was activated by MEKK-1."
# make example sentence
sentence = Sentence(text)
# for biomedical-specific tokenization:
# sentence = Sentence(text, use_tokenizer=SciSpacyTokenizer())
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [18,19,20,21,22,23,24,25,26,27,28,29,30]: "enhancer element located between - 89 and - 50 bp in PAI-1 promoter" [− Labels: Enhancer (0.992)]
```
So, the entity "*enhancer element located between - 89 and - 50 bp in PAI-1 promoter*" (labeled as an **Enhancer**) is found in the sentence.
Alternatively download all models locally and use the `MultiTagger` class.
```python
from flair.models import MultiTagger
# paths to the locally downloaded models
model_paths = [
    './models/hunflair-promoter/pytorch_model.bin',
    './models/hunflair-enhancer/pytorch_model.bin',
    './models/hunflair-tfbs/pytorch_model.bin',
]
tagger = MultiTagger.load(model_paths)
tagger.predict(sentence)
```
---
|
regel-corpus/hunflair-promoter | regel-corpus | 2022-11-28T14:36:20Z | 7 | 0 | flair | [
"flair",
"pytorch",
"hunflair",
"token-classification",
"sequence-tagger-model",
"en",
"region:us"
] | token-classification | 2022-03-29T11:22:27Z | ---
tags:
- flair
- hunflair
- token-classification
- sequence-tagger-model
language: en
widget:
- text: "Two putative extended promoters consensus sequences (p1 and p2)."
---
## HunFlair model for PROMOTER
[HunFlair](https://github.com/flairNLP/flair/blob/master/resources/docs/HUNFLAIR.md) (biomedical flair) for promoter entity.
Predicts 1 tag:
| **tag** | **meaning** |
|---------------------------------|-----------|
| Promoter | DNA promoter region |
---
### Cite
Please cite the following paper when using this model.
```
@article{garda2022regel,
title={RegEl corpus: identifying DNA regulatory elements in the scientific literature},
author={Garda, Samuele and Lenihan-Geels, Freyda and Proft, Sebastian and Hochmuth, Stefanie and Sch{\"u}lke, Markus and Seelow, Dominik and Leser, Ulf},
journal={Database},
volume={2022},
year={2022},
publisher={Oxford Academic}
}
```
---
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# for biomedical-specific tokenization:
# from flair.tokenization import SciSpacyTokenizer
# load tagger
tagger = SequenceTagger.load("regel-corpus/hunflair-promoter")
text = "The upstream region of the glnA gene contained two putative extended promoter consensus sequences (p1 and p2)."
# make example sentence
sentence = Sentence(text)
# for biomedical-specific tokenization:
# sentence = Sentence(text, use_tokenizer=SciSpacyTokenizer())
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [16]: "p1" [− Labels: Promoter (0.9878)]
Span [18]: "p2" [− Labels: Promoter (0.9216)]
```
So, the entities "*p1*" and "*p2*" (labeled as a **promoter**) are found in the sentence.
Alternatively download all models locally and use the `MultiTagger` class.
```python
from flair.models import MultiTagger
# paths to the locally downloaded models
model_paths = [
    './models/hunflair-promoter/pytorch_model.bin',
    './models/hunflair-enhancer/pytorch_model.bin',
    './models/hunflair-tfbs/pytorch_model.bin',
]
tagger = MultiTagger.load(model_paths)
tagger.predict(sentence)
```
|
fathyshalab/all-roberta-large-v1-banking-1 | fathyshalab | 2022-11-28T14:25:57Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T14:24:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-1
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6515
- Accuracy: 0.1644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5795 | 1.0 | 3 | 2.6515 | 0.1644 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/bert-uncased-massive-intent-classification-banking-1 | fathyshalab | 2022-11-28T14:15:57Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T14:11:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-uncased-massive-intent-classification-banking-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-massive-intent-classification-banking-1
This model is a fine-tuned version of [gokuls/bert-uncased-massive-intent-classification](https://huggingface.co/gokuls/bert-uncased-massive-intent-classification) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7010
- Accuracy: 0.1289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6675 | 1.0 | 3 | 2.7010 | 0.1289 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Fabiuas/test | Fabiuas | 2022-11-28T14:01:56Z | 187 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-11-28T13:42:00Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: test
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# test
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
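
A minimal inference sketch (assuming this repository id and a standard ViT image-classification head, as the tags suggest); the image path is a placeholder:

```python
from transformers import pipeline

# Assumed repository id; HuggingPics classifiers are ViT models usable
# with the image-classification pipeline.
classifier = pipeline("image-classification", model="Fabiuas/test")

# "my_pet_photo.jpg" is a placeholder; pass any local image path or PIL image.
print(classifier("my_pet_photo.jpg"))
```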
## Example Images
#### cat

#### dog
 |
fathyshalab/bert-uncased-massive-intent-classification_banking-1 | fathyshalab | 2022-11-28T13:48:29Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T13:40:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-uncased-massive-intent-classification_banking-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-massive-intent-classification_banking-1
This model is a fine-tuned version of [gokuls/bert-uncased-massive-intent-classification](https://huggingface.co/gokuls/bert-uncased-massive-intent-classification) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6770
- Accuracy: 0.1378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8977 | 1.0 | 3 | 2.7353 | 0.0622 |
| 2.5889 | 2.0 | 6 | 2.7109 | 0.0933 |
| 2.4362 | 3.0 | 9 | 2.6940 | 0.1111 |
| 2.3175 | 4.0 | 12 | 2.6817 | 0.1333 |
| 2.2524 | 5.0 | 15 | 2.6770 | 0.1378 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/bert-uncased-massive-intent-classification-finetuned-banking | fathyshalab | 2022-11-28T12:54:50Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T11:50:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-uncased-massive-intent-classification-finetuned-banking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-massive-intent-classification-finetuned-banking
This model is a fine-tuned version of [gokuls/bert-uncased-massive-intent-classification](https://huggingface.co/gokuls/bert-uncased-massive-intent-classification) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5965
- Accuracy: 0.12
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.731 | 1.0 | 3 | 2.6423 | 0.1067 |
| 2.4424 | 2.0 | 6 | 2.6178 | 0.1067 |
| 2.2005 | 3.0 | 9 | 2.6028 | 0.1111 |
| 2.1954 | 4.0 | 12 | 2.5965 | 0.12 |
| 2.0599 | 5.0 | 15 | 2.5935 | 0.12 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
minhtoan/t5-small-vietnamese-news | minhtoan | 2022-11-28T12:52:14Z | 122 | 4 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"vi",
"dataset:Wikilingua",
"dataset:Vietnews",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-11-24T08:01:28Z | ---
language: vi
datasets:
- Wikilingua
- Vietnews
tags:
- summarization
license: mit
widget:
- text: 'VKS cáo buộc ông Nguyễn Thế Hiệp có sai phạm trong vụ cháy gần Bệnh viện Nhi trung ương khiến 2 người chết, thiệt hại 1,9 tỷ đồng song bị cáo khẳng định vô tội. Mức án đề nghị 9-10 năm tù với bị cáo 73 tuổi được đại diện VKSND quận Ba Đình đưa ra chiều 28/11, quy buộc phạm tội Vi phạm quy định về phòng cháy chữa cháy, theo Điều 313 Bộ luật Hình sự. VKS nhận định ông Hiệp có lỗi trong việc vận hành nhà trọ không phép, không đủ điều kiện an toàn phòng cháy chữa cháy, gây thiệt hại về tài sản và khiến hai người chết. Tuy nhiên, bị cáo chưa bồi thường. Bản luận tội nêu, tại phiên tòa hôm nay ông Hiệp "chưa tỏ thái độ ăn năn hối hận, có nhân thân đặc biệt xấu". Từ hàng chục năm trước, ông từng 11 lần bị lập danh chỉ bản về hành vi trộm cắp, năm 1985 lại nhận 18 năm tù về các tội cướp tài sản, hiếp dâm, đưa hối lộ...'
inference:
parameters:
max_length: 150
---
# Text summarization for Vietnamese Language
A state-of-the-art lightweight pretrained Transformer-based encoder-decoder model for Vietnamese.
The model was trained on a Vietnamese news dataset with input length = 512 and output length = 150.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Example test data on VNExpress: https://vnexpress.net/ong-hiep-khung-khong-nhan-toi-trong-vu-chay-gan-benh-vien-nhi-4541483.html
tokenizer = AutoTokenizer.from_pretrained("minhtoan/t5-small-vietnamese-news")
model = AutoModelForSeq2SeqLM.from_pretrained("minhtoan/t5-small-vietnamese-news")
model.cuda()
src = 'VKS cáo buộc ông Nguyễn Thế Hiệp có sai phạm trong vụ cháy gần Bệnh viện Nhi trung ương khiến 2 người chết, thiệt hại 1,9 tỷ đồng song bị cáo khẳng định vô tội. Mức án đề nghị 9-10 năm tù với bị cáo 73 tuổi được đại diện VKSND quận Ba Đình đưa ra chiều 28/11, quy buộc phạm tội Vi phạm quy định về phòng cháy chữa cháy, theo Điều 313 Bộ luật Hình sự. VKS nhận định ông Hiệp có lỗi trong việc vận hành nhà trọ không phép, không đủ điều kiện an toàn phòng cháy chữa cháy, gây thiệt hại về tài sản và khiến hai người chết. Tuy nhiên, bị cáo chưa bồi thường. Bản luận tội nêu, tại phiên tòa hôm nay ông Hiệp "chưa tỏ thái độ ăn năn hối hận, có nhân thân đặc biệt xấu". Từ hàng chục năm trước, ông từng 11 lần bị lập danh chỉ bản về hành vi trộm cắp, năm 1985 lại nhận 18 năm tù về các tội cướp tài sản, hiếp dâm, đưa hối lộ...'
tokenized_text = tokenizer.encode(src, return_tensors="pt").cuda()
model.eval()
summary_ids = model.generate(tokenized_text, max_length=150)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
output
```
## Author
Phan Minh Toan |
team-nave/distilbert-base-uncased-distilled-clinc | team-nave | 2022-11-28T12:14:29Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T12:06:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9367741935483871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4175
- Accuracy: 0.9368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 159 | 3.3516 | 0.6652 |
| 3.4274 | 2.0 | 318 | 2.2866 | 0.7848 |
| 3.4274 | 3.0 | 477 | 1.5064 | 0.8545 |
| 1.6307 | 4.0 | 636 | 1.0204 | 0.8971 |
| 1.6307 | 5.0 | 795 | 0.7421 | 0.9177 |
| 0.7641 | 6.0 | 954 | 0.5838 | 0.9258 |
| 0.7641 | 7.0 | 1113 | 0.4986 | 0.9306 |
| 0.4482 | 8.0 | 1272 | 0.4489 | 0.9365 |
| 0.4482 | 9.0 | 1431 | 0.4258 | 0.9368 |
| 0.3442 | 10.0 | 1590 | 0.4175 | 0.9368 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
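
A minimal inference sketch (assuming the checkpoint is available under this repository id and that the clinc_oos intent names are stored in the model's `id2label` config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "team-nave/distilbert-base-uncased-distilled-clinc"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Please transfer 100 dollars to my savings account", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # predicted intent name
```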
|
tomekkorbak/zealous_almeida | tomekkorbak | 2022-11-28T12:04:20Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-28T12:04:13Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: zealous_almeida
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zealous_almeida
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.00078,
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'zealous_almeida',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/llhbsik2 |
cardiffnlp/twitter-roberta-base-offensive | cardiffnlp | 2022-11-28T11:36:23Z | 35,866 | 27 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | # Twitter-roBERTa-base for Offensive Language Identification
This is a roBERTa-base model trained on ~58M tweets and finetuned for offensive language identification with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='offensive'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
    html = f.read().decode('utf-8').split("\n")
    csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
    l = labels[ranking[i]]
    s = scores[ranking[i]]
    print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) not-offensive 0.9073
2) offensive 0.0927
```
|
biu-nlp/f-coref | biu-nlp | 2022-11-28T11:35:52Z | 88,201 | 18 | transformers | [
"transformers",
"pytorch",
"roberta",
"fast",
"coreference-resolution",
"en",
"dataset:multi_news",
"dataset:ontonotes",
"arxiv:2209.04280",
"arxiv:2205.12644",
"arxiv:1907.10529",
"arxiv:2101.00434",
"arxiv:2109.04127",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-08-19T12:01:10Z | ---
language:
- en
tags:
- fast
- coreference-resolution
license: mit
datasets:
- multi_news
- ontonotes
metrics:
- CoNLL
task_categories:
- coreference-resolution
model-index:
- name: biu-nlp/f-coref
results:
- task:
type: coreference-resolution
name: coreference-resolution
dataset:
name: ontonotes
type: coreference
metrics:
- name: Avg. F1
type: CoNLL
value: 78.5
---
## F-Coref: Fast, Accurate and Easy to Use Coreference Resolution
[F-Coref](https://arxiv.org/abs/2209.04280) can process 2.8K OntoNotes documents in 25 seconds on a V100 GPU (compared to 6 minutes for the [LingMess](https://arxiv.org/abs/2205.12644) model and 12 minutes for the popular AllenNLP coreference model), with only a modest drop in accuracy.
The fast speed is achieved by combining distillation of a compact model from the LingMess model with an efficient batching implementation using a technique we call leftover batching.
Please check the [official repository](https://github.com/shon-otmazgin/fastcoref) for more details and updates.
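For quick experimentation, the model can be run through the `fastcoref` package. The snippet below is only a minimal sketch following the package README; the class name `FCoref`, the `predict(texts=...)` call, and `get_clusters()` are taken from that README and may differ between versions:
```python
# pip install fastcoref
from fastcoref import FCoref

# The default checkpoint is biu-nlp/f-coref; use device='cpu' if no GPU is available.
model = FCoref(device='cuda:0')

preds = model.predict(
    texts=['We are so happy to see you using our coref package. This package is very fast!']
)

# Each prediction exposes the resolved coreference clusters as text spans.
print(preds[0].get_clusters())
```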
#### Experiments
| Model | Runtime | Memory |
|-----------------------|---------|---------|
| [Joshi et al. (2020)](https://arxiv.org/abs/1907.10529) | 12:06 | 27.4 |
| [Otmazgin et al. (2022)](https://arxiv.org/abs/2205.12644) | 06:43 | 4.6 |
| + Batching | 06:00 | 6.6 |
| [Kirstain et al. (2021)](https://arxiv.org/abs/2101.00434) | 04:37 | 4.4 |
| [Dobrovolskii (2021)](https://arxiv.org/abs/2109.04127) | 03:49 | 3.5 |
| [F-Coref](https://arxiv.org/abs/2209.04280) | 00:45 | 3.3 |
| + Batching | 00:35 | 4.5 |
| + Leftovers batching | 00:25 | 4.0 |
Inference time (Min:Sec) and memory (GiB) for each model on 2.8K documents, averaged over 3 runs. Hardware: NVIDIA Tesla V100 SXM2.
### Citation
```
@inproceedings{Otmazgin2022FcorefFA,
title={F-coref: Fast, Accurate and Easy to Use Coreference Resolution},
author={Shon Otmazgin and Arie Cattan and Yoav Goldberg},
booktitle={AACL},
year={2022}
}
```
[F-coref: Fast, Accurate and Easy to Use Coreference Resolution](https://aclanthology.org/2022.aacl-demo.6) (Otmazgin et al., AACL-IJCNLP 2022) |
projecte-aina/roberta-base-ca-v2-cased-wikicat-ca | projecte-aina | 2022-11-28T11:03:27Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"catalan",
"text classification",
"WikiCAT_ca",
"CaText",
"Catalan Textual Corpus",
"ca",
"dataset:projecte-aina/WikiCAT_ca",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-11T12:00:58Z | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "text classification"
- "WikiCAT_ca"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/WikiCAT_ca"
metrics:
- f1
model-index:
- name: roberta-base-ca-v2-cased-wikicat-ca
results:
- task:
type: text-classification
dataset:
type: projecte-aina/WikiCAT_ca
name: WikiCAT_ca
metrics:
- name: F1
type: f1
value: 77.823
widget:
- text: "La ressonància magnètica és una prova diagnòstica clau per a moltes malalties."
- text: "Les tres idees bàsiques del noümen són l'ànima, el món i Déu, i és una continuació de les tres substàncies de Descartes (tot i que el francès anomenava jo o ment l'ànima)."
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Viquipedia-based Text Classification.
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-ca-v2-cased-wikicat-ca** is a Text Classification model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
The dataset used is [WikiCAT_ca](https://huggingface.co/datasets/projecte-aina/WikiCAT_ca), automatically created from Wikipedia and Wikidata sources.
## Intended uses and limitations
**roberta-base-ca-v2-cased-wikicat-ca** model can be used to classify texts. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-wikicat-ca")
example = "La ressonància magnètica és una prova diagnòstica clau per a moltes malalties."
tc_results = nlp(example)
pprint(tc_results)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the TC dataset in Catalan called [WikiCAT_ca](https://huggingface.co/datasets/projecte-aina/WikiCAT_ca) for training and evaluation.
### Training procedure
The model was trained with a batch size of 16 and three learning rates (1e-5, 3e-5, 5e-5) for 10 epochs. We then selected the best learning rate (3e-5) and checkpoint (epoch 3, step 1857) using the downstream task metric in the corresponding development set.
## Evaluation
### Variable and metrics
This model was finetuned maximizing F1 (weighted) score.
### Evaluation results
We evaluated the _roberta-base-ca-v2-cased-wikicat-ca_ on the WikiCAT_ca dev set:
| Model | WikiCAT_ca (F1)|
| ------------|:-------------|
| roberta-base-ca-v2-cased-wikicat-ca | 77.823 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to [email protected]
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
projecte-aina/roberta-base-ca-v2-cased-tc | projecte-aina | 2022-11-28T11:02:09Z | 110 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"catalan",
"text classification",
"tecla",
"CaText",
"Catalan Textual Corpus",
"ca",
"dataset:projecte-aina/tecla",
"arxiv:1907.11692",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-30T07:55:23Z | ---
language:
- ca
tags:
- "catalan"
- "text classification"
- "tecla"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/tecla"
metrics:
- accuracy
model-index:
- name: roberta-base-ca-v2-cased-tc
results:
- task:
type: text-classification
dataset:
name: TeCla
type: projecte-aina/tecla
metrics:
- name: Accuracy
type: accuracy
value: 0.8034
widget:
- text: "Els Pets presenten el seu nou treball al Palau Sant Jordi."
- text: "Els barcelonins incrementen un 23% l’ús del cotxe des de l’inici de la pandèmia."
- text: "Retards a quatre línies de Rodalies per una avaria entre Sants i plaça de Catalunya."
- text: "Majors de 60 anys i sanitaris començaran a rebre la tercera dosi de la vacuna covid els propers dies."
- text: "Els cinemes Verdi estrenen Verdi Classics, un nou canal de televisió."
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for TeCla-based Text Classification.
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-ca-v2-cased-tc** is a Text Classification (TC) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
The previous version of this model, which was trained on the old TeCla dataset (v1), can still be accessed through the "v1" tag.
## Intended uses and limitations
**roberta-base-ca-v2-cased-tc** model can be used to classify texts. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-tc")
example = "Retards a quatre línies de Rodalies per una avaria entre Sants i plaça de Catalunya."
tc_results = nlp(example)
pprint(tc_results)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the TC dataset in Catalan called [TeCla](https://huggingface.co/datasets/projecte-aina/tecla) for training and evaluation. Although TeCla includes a coarse-grained ('label1') and a fine-grained categorization ('label2'), only the last one, with 53 classes, was used for the training.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and metrics
This model was finetuned maximizing F1 (weighted).
### Evaluation results
We evaluated the _roberta-base-ca-v2-cased-tc_ on the TeCla test set against standard multilingual and monolingual baselines. The results for 'label1' categories were obtained through a mapping from the fine-grained category ('label2') to the corresponding coarse-grained one ('label1').
| Model | TeCla - label1 (Accuracy) | TeCla - label2 (Accuracy) |
| ------------|:-------------|:-------------|
| roberta-base-ca-v2 | 96.31 | 80.34 |
| roberta-large-ca-v2 | **96.51** | **80.68** |
| mBERT | 95.72 | 78.47 |
| XLM-RoBERTa | 95.66 | 78.01 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to [email protected]
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
JapaNLP/t5-efficient-xl-nl6-japanese | JapaNLP | 2022-11-28T10:09:07Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-28T09:58:01Z | ---
license: afl-3.0
---
# Overview
`t5-efficient-xl-nl6-ja` is a Japanese version of [`google/t5-efficient-xl-nl6`](https://huggingface.co/google/t5-efficient-xl-nl6).
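Since no downstream results are reported yet, the snippet below is only a hedged loading sketch using the standard `transformers` seq2seq API; the example input is illustrative and the checkpoint will normally need task-specific fine-tuning before generations are meaningful:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "JapaNLP/t5-efficient-xl-nl6-japanese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative only: a pretrained T5 checkpoint usually requires fine-tuning
# (e.g. on QA or summarization data) before its generations are useful.
inputs = tokenizer("これはテストの入力文です。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```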
# Results
- Under construction
- If you obtain experimental results for this model on downstream tasks, please feel free to open a Pull Request.
## Question Answering
## Others
# Acknowledgement
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) |
mn367/radio-mlm | mn367 | 2022-11-28T09:52:57Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-28T09:42:20Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mn367/radio-mlm
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mn367/radio-mlm
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.6630
- Validation Loss: 4.6014
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 39000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.6630 | 4.6014 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pkachhad/t5-small-finetuned-parth | pkachhad | 2022-11-28T09:19:48Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-28T07:51:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-parth
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-parth
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9468
- Rouge1: 26.5826
- Rouge2: 21.7867
- Rougel: 25.1629
- Rougelsum: 26.2364
- Gen Len: 16.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 4 | 3.3692 | 25.2983 | 20.639 | 24.0087 | 25.0732 | 16.2 |
| No log | 2.0 | 8 | 3.1818 | 25.4926 | 20.9783 | 24.0651 | 25.2635 | 16.3 |
| No log | 3.0 | 12 | 3.0498 | 26.2652 | 21.5076 | 24.8077 | 25.9478 | 16.65 |
| No log | 4.0 | 16 | 2.9742 | 26.5826 | 21.7867 | 25.1629 | 26.2364 | 16.9 |
| No log | 5.0 | 20 | 2.9468 | 26.5826 | 21.7867 | 25.1629 | 26.2364 | 16.9 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
huggingtweets/bobkerns | huggingtweets | 2022-11-28T08:14:20Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-28T08:14:12Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/3653376550/f40f9602f2e8e185eb7ddce332157ffe_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bob (Moderna #5) Kerns</div>
<div style="text-align: center; font-size: 14px;">@bobkerns</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bob (Moderna #5) Kerns.
| Data | Bob (Moderna #5) Kerns |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 315 |
| Short tweets | 42 |
| Tweets kept | 2877 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/390ksfue/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bobkerns's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3me25qi0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3me25qi0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bobkerns')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pere/whisper-NST2-unfreeze-constanti-low-lr | pere | 2022-11-28T07:41:42Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-23T10:34:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-NST2-unfreeze-constanti-low-lr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-NST2-unfreeze-constanti-low-lr
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3562
- Wer: 8.5519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 96
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 20000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1901 | 0.05 | 1000 | 0.3069 | 14.8233 |
| 0.1323 | 0.1 | 2000 | 0.2687 | 11.2885 |
| 0.1137 | 0.15 | 3000 | 0.2620 | 10.8324 |
| 0.1022 | 0.2 | 4000 | 0.2976 | 9.0080 |
| 0.0937 | 0.25 | 5000 | 0.2584 | 9.5781 |
| 0.0875 | 0.3 | 6000 | 0.2704 | 20.2965 |
| 0.0592 | 1.05 | 7000 | 0.2751 | 9.0080 |
| 0.0488 | 1.1 | 8000 | 0.2778 | 8.6659 |
| 0.0475 | 1.15 | 9000 | 0.2792 | 9.4641 |
| 0.0439 | 1.2 | 10000 | 0.2880 | 8.3238 |
| 0.0425 | 1.25 | 11000 | 0.2954 | 8.5519 |
| 0.0416 | 1.3 | 12000 | 0.2896 | 20.2965 |
| 0.0289 | 2.05 | 13000 | 0.2990 | 7.9818 |
| 0.0229 | 2.1 | 14000 | 0.3027 | 7.4116 |
| 0.0248 | 2.15 | 15000 | 0.2968 | 8.6659 |
| 0.0225 | 2.2 | 16000 | 0.3100 | 8.5519 |
| 0.0222 | 2.25 | 17000 | 0.3132 | 9.3501 |
| 0.0219 | 2.3 | 18000 | 0.3230 | 7.6397 |
| 0.0162 | 3.04 | 19000 | 0.3380 | 9.8062 |
| 0.0132 | 3.09 | 20000 | 0.3562 | 8.5519 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
linfuyou/bert-squad-training | linfuyou | 2022-11-28T07:41:14Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-15T09:15:55Z | bert-base-cased-squadv1.1-training |
Shubham09/whispertestlocal | Shubham09 | 2022-11-28T06:40:40Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-25T09:13:41Z | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whispertestlocal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whispertestlocal
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4481
- Wer: 46.1754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1886 | 1.12 | 100 | 0.4481 | 46.1754 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
amagzari/pegasus-cnn_dailymail-finetuned-samsum-v2 | amagzari | 2022-11-28T05:20:40Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-28T03:55:08Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: pegasus-cnn_dailymail-finetuned-samsum-v2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: train
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 45.3045
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-cnn_dailymail-finetuned-samsum-v2
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5218
- Rouge1: 45.3045
- Rouge2: 21.7601
- Rougel: 35.8643
- Rougelsum: 41.6595
- Gen Len: 35.4425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6997 | 1.0 | 1841 | 1.5218 | 45.3045 | 21.7601 | 35.8643 | 41.6595 | 35.4425 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
inkoziev/sbert_pq | inkoziev | 2022-11-28T04:45:46Z | 309 | 16 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"ru",
"license:unlicense",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-10-17T13:27:40Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language: ru
license: unlicense
widget:
- source_sentence: "Кошка ловит мышку."
sentences: ["Кто ловит мышку?", "Где живет кошка?", "Как мышку зовут?"]
---
# SBERT_PQ
This is a [sentence-transformers](https://www.SBERT.net) model for estimating the relevance between a short text (mostly a single sentence of up to 10-15 words) and a question.
The model computes 312-dimensional vectors for the text and the question. The cosine of the angle between these vectors scores whether the text contains the answer to the given question. In the [dialogue system project](https://github.com/Koziev/chatbot) it is used for semantic search over a fact base, given the question asked by the interlocutor.
# Speed and accuracy
The model is based on [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2).
It is very small and runs inference quickly even on a CPU.
The maximum cossim_f1 score on the test split (10% of the dataset) is **0.986**.
With sberbank-ai/ruBert-base as the base model, the maximum cossim_f1 is **0.992**.
## Usage with the Sentence-Transformers library
Install [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
To score relevance for a single text-question pair, you can use the following code:
```
import sentence_transformers
sentences = ["Кошка ловит мышку.", "Чем занята кошка?"]
model = sentence_transformers.SentenceTransformer('inkoziev/sbert_pq')
embeddings = model.encode(sentences)
s = sentence_transformers.util.cos_sim(a=embeddings[0], b=embeddings[1])
print('text={} question={} cossim={}'.format(sentences[0], sentences[1], s))
```
## Contacts and citation
```
@MISC{rugpt_chitchat,
author = {Ilya Koziev},
title = {Texts & Questions Relevancy Model},
url = {https://huggingface.co/inkoziev/sbert_pq},
year = 2022
}
```
|
cavitcakir/swin-tiny-patch4-window7-224-finetuned-eurosat | cavitcakir | 2022-11-28T04:30:00Z | 206 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-11-28T04:24:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5373
- Accuracy: 0.7639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6855 | 0.98 | 10 | 0.6436 | 0.625 |
| 0.6499 | 1.98 | 20 | 0.5745 | 0.7083 |
| 0.6021 | 2.98 | 30 | 0.5373 | 0.7639 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
speedrunner/atitanstrawberry | speedrunner | 2022-11-28T03:30:00Z | 0 | 4 | null | [
"region:us"
] | null | 2022-11-28T00:55:15Z | not my work - all credit to original author! |
thisisHJLee/wav2vec2-large-xls-r-1b-korean-sample2 | thisisHJLee | 2022-11-28T02:25:48Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-25T04:56:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-1b-korean-sample2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-korean-sample2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1283
- Cer: 0.0294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3415 | 1.0 | 11471 | 0.2666 | 0.0750 |
| 0.1997 | 2.0 | 22942 | 0.1617 | 0.0415 |
| 0.1153 | 3.0 | 34413 | 0.1283 | 0.0294 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.11.0
|
minimaxir/midjourney_sd_2_0 | minimaxir | 2022-11-28T02:13:38Z | 0 | 12 | null | [
"license:mit",
"region:us"
] | null | 2022-11-28T02:04:00Z | ---
license: mit
---
### Midjourney Style for Stable Diffusion 2.0
A textual inversion embedding for the `<midjourney>` token, adapted for Stable Diffusion 2.0 from [sd-concepts-library/midjourney-style](https://huggingface.co/sd-concepts-library/midjourney-style).
It's recommended to use the following as an addition to a prompt:
```txt
in the style of <midjourney>
```
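As an illustrative, unverified sketch (not part of the original instructions): recent `diffusers` releases can load textual inversion embeddings from the Hub via `load_textual_inversion`. This assumes the embedding file in this repository is stored in a format that loader accepts and that it registers the `<midjourney>` token:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
).to("cuda")

# Assumption: the repo's embedding file is readable by load_textual_inversion
# and maps to the <midjourney> token used in the prompt below.
pipe.load_textual_inversion("minimaxir/midjourney_sd_2_0", token="<midjourney>")

image = pipe("a castle on a cliff, in the style of <midjourney>").images[0]
image.save("midjourney_style.png")
```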
|
huggingtweets/tarunchitra | huggingtweets | 2022-11-28T02:11:02Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-28T02:09:42Z | ---
language: en
thumbnail: http://www.huggingtweets.com/tarunchitra/1669601459083/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1587539091444432897/Z6_nmrCB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tarun Chitra</div>
<div style="text-align: center; font-size: 14px;">@tarunchitra</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tarun Chitra.
| Data | Tarun Chitra |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 439 |
| Short tweets | 362 |
| Tweets kept | 2433 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ex37piz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tarunchitra's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/12p1kbwc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/12p1kbwc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tarunchitra')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lewispons/large-email-classifier | lewispons | 2022-11-28T01:56:52Z | 2 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-26T22:47:23Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# lewispons/large-email-classifier
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('lewispons/large-email-classifier')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lewispons/large-email-classifier)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 752 with parameters:
```
{'batch_size': 50, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2256,
"warmup_steps": 226,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
fanpu/final_model_output_subreddit-wallstreetbets_3 | fanpu | 2022-11-28T01:42:49Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-27T19:02:43Z | ---
tags:
- generated_from_trainer
model-index:
- name: final_model_output_subreddit-wallstreetbets_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_model_output_subreddit-wallstreetbets_3
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2588 | 1.25 | 5000 | 3.6824 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
erkanxyzalaca/turkishReviews-ds-mini | erkanxyzalaca | 2022-11-28T01:38:07Z | 61 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-27T22:00:36Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: turkishReviews-ds-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkishReviews-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.3867
- Validation Loss: 8.3741
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -765, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2149 | 9.6891 | 0 |
| 9.0695 | 8.7610 | 1 |
| 8.3867 | 8.3741 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.10.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ohrenn/lorepass | ohrenn | 2022-11-28T00:28:39Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-11-28T00:28:39Z | ---
license: bigscience-bloom-rail-1.0
---
|
Tara2301/PPO-LunarLander-v22 | Tara2301 | 2022-11-27T23:31:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-27T22:02:04Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.24 +/- 19.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumption: the checkpoint is stored under this filename in the repo.
checkpoint = load_from_hub("Tara2301/PPO-LunarLander-v22", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
wnordmann/klaus_weights | wnordmann | 2022-11-27T23:23:06Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2022-11-27T18:39:46Z | ---
license: openrail
---
# Klaus the Cat
This model is trained on 30+ pictures of my cat Klaus. Klaus has Manx Syndrome, which means he has no tail and limited feeling in his legs. He's a super cute yellow kitten my family loves.
## Prompt
`nord klaus` |
jasoneden/BLOOM-560-QuestionAnswering-CDC-Covid19-Tuned | jasoneden | 2022-11-27T23:16:23Z | 48 | 1 | transformers | [
"transformers",
"pytorch",
"bloom",
"question-answering",
"generated_from_trainer",
"dataset:dataset",
"license:bigscience-bloom-rail-1.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-24T04:56:03Z | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
datasets:
- dataset
model-index:
- name: cdcmodel_train02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cdcmodel_train02
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the dataset dataset. It currently will not load.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Ueumol/Utapri_Style | Ueumol | 2022-11-27T22:09:28Z | 0 | 0 | null | [
"region:us"
] | null | 2022-11-27T21:19:16Z | Need to use prompt -
Utapristyle |
cmudrc/wave-energy-analysis | cmudrc | 2022-11-27T22:08:42Z | 12 | 1 | tf-keras | [
"tf-keras",
"mechanical-engineering",
"simulation",
"hydrodynamics",
"en",
"dataset:cmudrc/wave-energy",
"license:mit",
"region:us"
] | null | 2022-11-27T04:33:25Z | ---
license: mit
language: en
datasets:
- cmudrc/wave-energy
tags:
- mechanical-engineering
- simulation
- hydrodynamics
--- |
cmudrc/wave-energy-synthesis | cmudrc | 2022-11-27T21:30:39Z | 3 | 1 | tf-keras | [
"tf-keras",
"en",
"dataset:cmudrc/wave-energy",
"license:mit",
"region:us"
] | null | 2022-11-27T04:33:15Z | ---
license: mit
language: en
datasets:
- cmudrc/wave-energy
---
|
ProGamerGov/Object-Taped-To-Wall-Diffusion-V1 | ProGamerGov | 2022-11-27T21:17:56Z | 0 | 15 | null | [
"stable-diffusion",
"text-to-image",
"dataset:ProGamerGov/StableDiffusion-v1-5-Regularization-Images",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2022-11-24T01:24:34Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
datasets:
- ProGamerGov/StableDiffusion-v1-5-Regularization-Images
---
**Object-Taped-To-Wall-Diffusion**
This fine-tuned Stable Diffusion v1.5 model was trained for 2000 iterations with a batch size of 4, on a selection of photos of things taped to a wall. Training was performed using [ShivamShrirao/diffusers](https://github.com/ShivamShrirao/diffusers) with full precision, prior-preservation loss, the train-text-encoder feature, and the new [1.5 MSE VAE from Stability AI](https://huggingface.co/stabilityai/sd-vae-ft-mse). A total of 2100 regularization / class images were used from [here](https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images). Regularization images were generated using the prompt "artwork style" with 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "text" was also used for this dataset.
Use the tokens **ttw style** in your prompts for the effect. Note that the effect also appears to occur at a much weaker strength on prompts that steer the output towards specific artistic styles.
This model will likely not perform well on taping objects that are not traditionally able to be taped to walls.
<div align="center">
<img src="https://huggingface.co/ProGamerGov/Object-Taped-To-Wall-Diffusion-V1/resolve/main/v1_size_512x512_t4x8.png">
</div>
* [Full Image](https://huggingface.co/ProGamerGov/Object-Taped-To-Wall-Diffusion-V1/resolve/main/v1_size_512x512_t4x8.png)
Example images were generated with the v1 2000 iteration model using DPM++ 2S a Karras:
```
ttw style, <object> taped to wall
```
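As a hedged sketch (not from the original card): if the repository hosts diffusers-format weights, which the ShivamShrirao/diffusers DreamBooth script produces, generation could look like the example below; if only a .ckpt file is provided, it would need conversion first:
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: diffusers-format weights are available under this repo id.
pipe = StableDiffusionPipeline.from_pretrained(
    "ProGamerGov/Object-Taped-To-Wall-Diffusion-V1", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "ttw style, a banana taped to wall",
    negative_prompt="text",
    num_inference_steps=50,
    guidance_scale=7,
).images[0]
image.save("ttw_banana.png")
```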
This model was inspired by the 2019 art piece [*Comedian* by Italian artist Maurizio Cattelan](https://en.wikipedia.org/wiki/Comedian_(artwork\)), where a banana was duct taped to a wall.
|
Davimartins/Farias123 | Davimartins | 2022-11-27T20:50:51Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-11-27T20:50:50Z | ---
license: bigscience-openrail-m
---
|
Subsets and Splits