modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tner/bertweet-base-tweetner-2020-2021-concat | 6c4af2fa2c137015cf53fb02533cfdd79a6aa38b | 2022-07-09T21:19:27.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bertweet-base-tweetner-2020-2021-concat | 3 | null | transformers | 22,700 | Entry not found |
tner/bertweet-base-tweetner-2020-2021-continuous | 183a3ebea8060d2e3774a20971b87c02c0b793d0 | 2022-07-11T22:19:54.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bertweet-base-tweetner-2020-2021-continuous | 3 | null | transformers | 22,701 | Entry not found |
huggingtweets/06melihgokcek | 3fb3f4e45bce3940bb268e12de2091df7e14d4f3 | 2022-07-10T03:44:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/06melihgokcek | 3 | null | transformers | 22,702 | ---
language: en
thumbnail: http://www.huggingtweets.com/06melihgokcek/1657424657914/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1419298461/Baskan_0383_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">İbrahim Melih Gökçek</div>
<div style="text-align: center; font-size: 14px;">@06melihgokcek</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from İbrahim Melih Gökçek.
| Data | İbrahim Melih Gökçek |
| --- | --- |
| Tweets downloaded | 3237 |
| Retweets | 457 |
| Short tweets | 307 |
| Tweets kept | 2473 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/b48osocr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @06melihgokcek's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3d3h0tqk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3d3h0tqk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

# Load the fine-tuned model and generate five completions of the prompt.
generator = pipeline('text-generation', model='huggingtweets/06melihgokcek')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
freedomking/ernie-ctm-base | 1119e9d725d617eca648f15c9db5ed2123d72c02 | 2022-07-10T08:04:18.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | freedomking | null | freedomking/ernie-ctm-base | 3 | null | transformers | 22,703 | ## Introduction
### ERNIE-CTM (ERNIE for Chinese Text Mining)
ERNIE-CTM is a pretrained language model for Chinese text mining tasks. It offers a more comprehensive Chinese character vocabulary, stronger performance on Chinese text mining tasks, and deep integration with PaddleNLP for more convenient practical application.
### ERNIE-CTM features
* Comprehensive expansion of the Chinese character vocabulary
ERNIE-CTM's character set covers 20,000+ Chinese characters, common Chinese symbols (frequent punctuation, pinyin, numbering), and some foreign-language symbols (kana, units), greatly reducing annotation problems caused by UNK (unrecognized) characters in Chinese parsing and mining tasks. ERNIE-CTM also uses embedding factorization, so the application vocabulary can be extended more flexibly.
* Better adapted to Chinese text mining tasks
ERNIE-CTM appends global information to each representation, layering global context on top of the sequence features, which gives it stronger performance on text mining tasks.
* A model architecture that supports training on multiple feature types
The ERNIE-CTM architecture supports training with multiple feature types: users can add tasks and their corresponding features as needed, without worrying about catastrophic forgetting caused by conflicts between tasks.
More details:
https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/text_to_knowledge/ernie-ctm
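The card's tags list `pytorch`, `bert`, and `transformers`, so the checkpoint should load through the standard auto classes. A minimal loading sketch under that assumption (not an official PaddleNLP example):

```python
from transformers import AutoTokenizer, AutoModel

# Load the ERNIE-CTM checkpoint named in this card.
tokenizer = AutoTokenizer.from_pretrained("freedomking/ernie-ctm-base")
model = AutoModel.from_pretrained("freedomking/ernie-ctm-base")

# Encode a short Chinese sentence and inspect the token-level representations.
inputs = tokenizer("百度是一家中国公司。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```
|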
jonatasgrosman/exp_w2v2t_de_unispeech_s62 | a86ddd4bc5b10e5b1bfe02816b715df58067eac5 | 2022-07-10T10:27:30.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_de_unispeech_s62 | 3 | null | transformers | 22,704 | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_de_unispeech_s62
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
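A minimal transcription sketch with the HuggingSound library linked above; the audio path is a placeholder:

```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned checkpoint named in this card.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_de_unispeech_s62")

# Transcribe one or more 16kHz audio files (the path is a placeholder).
transcriptions = model.transcribe(["/path/to/audio.wav"])
print(transcriptions[0]["transcription"])
```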
|
jonatasgrosman/exp_w2v2t_de_unispeech-ml_s257 | d85a098f6e7de4770aebaf55300a0b182bc85917 | 2022-07-10T11:23:57.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_de_unispeech-ml_s257 | 3 | null | transformers | 22,705 | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_de_unispeech-ml_s257
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
aws-ai/dse-distilbert-base | 693ab0e22dc55bb5d874f8bb5fee9cf96cb385d8 | 2022-07-10T19:30:47.000Z | [
"pytorch",
"distilbert",
"transformers"
] | null | false | aws-ai | null | aws-ai/dse-distilbert-base | 3 | null | transformers | 22,706 | Entry not found |
tner/bertweet-large-tweetner-2021 | ed88a7b40dee511204af2a5ae43e5a7b19e5350a | 2022-07-10T23:35:59.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bertweet-large-tweetner-2021 | 3 | null | transformers | 22,707 | Entry not found |
tner/bertweet-large-tweetner-2020-2021-concat | c7ea4500bfb934b11f476d61fabec1f806a30389 | 2022-07-10T23:40:22.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bertweet-large-tweetner-2020-2021-concat | 3 | null | transformers | 22,708 | Entry not found |
tner/bertweet-large-tweetner-2020-2021-continuous | 743ab58636b3392864131174decc4b0784e40991 | 2022-07-12T14:04:23.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bertweet-large-tweetner-2020-2021-continuous | 3 | null | transformers | 22,709 | Entry not found |
tner/roberta-base-tweetner-random | d19995989585bd5aef0565684dc6a2d99423ae27 | 2022-07-11T00:42:21.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/roberta-base-tweetner-random | 3 | null | transformers | 22,710 | Entry not found |
tner/bert-base-tweetner-random | c3454cb2a2411c6cd69081b87263eef9e804b1c6 | 2022-07-11T10:46:53.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bert-base-tweetner-random | 3 | null | transformers | 22,711 | Entry not found |
tner/bert-large-tweetner-random | c751c9020aab5d4d0a19b673b8f4e8ba9735941e | 2022-07-11T11:24:07.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bert-large-tweetner-random | 3 | null | transformers | 22,712 | Entry not found |
jonatasgrosman/exp_w2v2t_es_unispeech-ml_s474 | 4e8b4414542f07abdf8f24f4509ded87599e2c76 | 2022-07-11T11:58:24.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_es_unispeech-ml_s474 | 3 | null | transformers | 22,713 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_unispeech-ml_s474
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
tner/bert-base-tweetner-2021 | 9d2adb1c6142a05535fb96894c03be6afb2fac49 | 2022-07-11T22:18:29.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bert-base-tweetner-2021 | 3 | null | transformers | 22,714 | Entry not found |
tner/bert-base-tweetner-2020-2021-concat | f92ec913df8e24b271e9f3dab7fc91757d309af5 | 2022-07-11T22:19:54.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bert-base-tweetner-2020-2021-concat | 3 | null | transformers | 22,715 | Entry not found |
jonatasgrosman/exp_w2v2t_es_r-wav2vec2_s809 | b98696228157e00a52fe2ef676af79f2453f0ec9 | 2022-07-11T16:26:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_es_r-wav2vec2_s809 | 3 | null | transformers | 22,716 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_r-wav2vec2_s809
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_es_r-wav2vec2_s227 | f0daf989047e48c27b19a9695d9a234fd6cd141a | 2022-07-11T16:34:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_es_r-wav2vec2_s227 | 3 | null | transformers | 22,717 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_r-wav2vec2_s227
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
tner/twitter-roberta-base-dec2021-tweetner-random | a404d7f11373654b95a716b67065287ca6b05e0e | 2022-07-11T16:46:32.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/twitter-roberta-base-dec2021-tweetner-random | 3 | null | transformers | 22,718 | Entry not found |
jonatasgrosman/exp_w2v2t_es_vp-it_s320 | 5de0d6e39d36d7b67e8b97f9708a4b34dba891d1 | 2022-07-11T16:48:28.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_es_vp-it_s320 | 3 | null | transformers | 22,719 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_vp-it_s320
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
tner/bert-large-tweetner-2021 | 84e3a47f802749cc7d41e3fa13464f15c87d3bbb | 2022-07-12T09:26:24.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bert-large-tweetner-2021 | 3 | null | transformers | 22,720 | Entry not found |
tner/bert-large-tweetner-2020-2021-concat | 4894c9c252d2080da9555ffd55c6734797b7ace9 | 2022-07-12T09:30:22.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bert-large-tweetner-2020-2021-concat | 3 | null | transformers | 22,721 | Entry not found |
JasonXu/lab4 | dc36008d1753bb4db4b54fc4e53518d4c9096f38 | 2022-07-12T10:09:59.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | JasonXu | null | JasonXu/lab4 | 3 | null | transformers | 22,722 | Entry not found |
Hamzaaa/wav2vec2-base-960h-finetuned-trained-Crema_only | 97cb7ab9592c9476de74a41a75d9f0bf7643501b | 2022-07-12T11:29:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | Hamzaaa | null | Hamzaaa/wav2vec2-base-960h-finetuned-trained-Crema_only | 3 | null | transformers | 22,723 | Entry not found |
Team-PIXEL/pixel-base-finetuned-pos-ud-english-ewt | 53b7d1d7888b8ab99c109207915625674697c7c7 | 2022-07-13T00:49:16.000Z | [
"pytorch",
"pixel",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-pos-ud-english-ewt | 3 | null | transformers | 22,724 | Entry not found |
Hamzaaa/wav2vec2-base-960h-finetuned-trained-greek | eee025c058322f3594418e3f09a2ed4840f0bd61 | 2022-07-13T09:43:11.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | Hamzaaa | null | Hamzaaa/wav2vec2-base-960h-finetuned-trained-greek | 3 | null | transformers | 22,725 | Entry not found |
KeLiu/QETRA_Java | 2fff2f6feff6acb498f904ba74b716177f6ca634 | 2022-07-13T13:32:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | KeLiu | null | KeLiu/QETRA_Java | 3 | null | transformers | 22,726 | Entry not found |
KeLiu/QETRA_CSharp | 2c18c20ac9af2cc5fa1150fa115e82e9d9ea0912 | 2022-07-13T13:37:31.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | KeLiu | null | KeLiu/QETRA_CSharp | 3 | null | transformers | 22,727 | Entry not found |
RJ3vans/ElectraSSCCVspanTagger | 0b32f8746519367add17a36d3aba939cc28f3470 | 2022-07-13T23:34:28.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | RJ3vans | null | RJ3vans/ElectraSSCCVspanTagger | 3 | null | transformers | 22,728 | Entry not found |
ghadeermobasher/OriginalBiomedNLP-PubMedBERT-base-uncased-abstract-BioRED-CD-128-32-30 | d6a4a32a649171257e9d5c4c4e6610a9901a0c4d | 2022-07-13T17:22:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/OriginalBiomedNLP-PubMedBERT-base-uncased-abstract-BioRED-CD-128-32-30 | 3 | null | transformers | 22,729 | Entry not found |
ghadeermobasher/Modified-BiomedNLP-PubMedBERT-base-uncased-abstract-BioRED-CD-128-32-30 | 24f6e9e365e3a965af3aadea87eab6929e0d631f | 2022-07-13T17:23:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Modified-BiomedNLP-PubMedBERT-base-uncased-abstract-BioRED-CD-128-32-30 | 3 | null | transformers | 22,730 | Entry not found |
ghadeermobasher/Originalbiobert-v1.1-BioRED-CD-128-32-30 | 83b03b89e698ca3b3de02d65e6485d0f89d754e9 | 2022-07-13T17:47:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Originalbiobert-v1.1-BioRED-CD-128-32-30 | 3 | null | transformers | 22,731 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: Originalbiobert-v1.1-BioRED-CD-128-32-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Originalbiobert-v1.1-BioRED-CD-128-32-30
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Precision: 0.9994
- Recall: 1.0
- F1: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
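These settings map directly onto `transformers.TrainingArguments`. A hypothetical reconstruction for reference (the `output_dir` is a placeholder; the Adam betas and epsilon shown are the defaults):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./biored-cd-run",  # placeholder name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=1,
    adam_beta1=0.9,       # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30.0,
)
```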
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.10.3
|
ghadeermobasher/OriginalBiomedNLP-bluebert_pubmed_uncased_L-12_H-768_A-12-BioRED_Dis-256-16-5 | 3f51d473871e9284181e3faa738f1d39948804de | 2022-07-13T20:25:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/OriginalBiomedNLP-bluebert_pubmed_uncased_L-12_H-768_A-12-BioRED_Dis-256-16-5 | 3 | null | transformers | 22,732 | Entry not found |
ghadeermobasher/Modifiedbluebert_pubmed_uncased_L-12_H-768_A-12-BioRED-Dis-256-16-5 | 1f126c9421c2a27c78dcb67a85b2535588203cb5 | 2022-07-13T20:25:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Modifiedbluebert_pubmed_uncased_L-12_H-768_A-12-BioRED-Dis-256-16-5 | 3 | null | transformers | 22,733 | Entry not found |
Hamzaaa/wav2vec2-base-finetuned-Saveee | 23415d57f6afe07ffd58b4d71c0014bf32fa6fca | 2022-07-15T22:09:58.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | Hamzaaa | null | Hamzaaa/wav2vec2-base-finetuned-Saveee | 3 | null | transformers | 22,734 | Entry not found |
Lyla/dummy-model | cd45a21359a8c7d9dc990f7197a54ca3496e427b | 2022-07-17T05:01:48.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Lyla | null | Lyla/dummy-model | 3 | null | transformers | 22,735 | Entry not found |
Aktsvigun/bart-base_abssum_scisummnet_3982742 | 191761613c46d18db07e9a8bd6825d207baf30b3 | 2022-07-19T05:58:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_abssum_scisummnet_3982742 | 3 | null | transformers | 22,736 | Entry not found |
Aktsvigun/bart-base_abssum_wikihow_all_6585777 | 7a9c6f39dbdeb2414f98325ac8aa917c7347b4e6 | 2022-07-19T06:24:39.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_abssum_wikihow_all_6585777 | 3 | null | transformers | 22,737 | Entry not found |
Aktsvigun/bart-base_abssum_wikihow_all_23419 | 637c37d4518e14e0cefd4b51ddf9a0b05b17ee6e | 2022-07-19T06:28:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_abssum_wikihow_all_23419 | 3 | null | transformers | 22,738 | Entry not found |
Aktsvigun/bart-base_abssum_scisummnet_2470973 | 4cd8dd3e50c6e3ed6e09a743f094204d933335ec | 2022-07-19T06:51:21.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_abssum_scisummnet_2470973 | 3 | null | transformers | 22,739 | Entry not found |
Aktsvigun/bart-base_abssum_scisummnet_6864530 | aa869026e56bd1319b1a4b462ece87a3c5246cbe | 2022-07-19T07:51:23.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_abssum_scisummnet_6864530 | 3 | null | transformers | 22,740 | Entry not found |
Siyong/MT_RN_LM | 4a9352389350351e3b002a21d95a3f79f1d37000 | 2022-07-20T03:25:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Siyong | null | Siyong/MT_RN_LM | 3 | null | transformers | 22,741 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: run1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6666
- Wer: 0.6375
- Cer: 0.3170
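A minimal inference sketch with the transformers ASR pipeline; the audio path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint named in this card.
asr = pipeline("automatic-speech-recognition", model="Siyong/MT_RN_LM")

# Transcribe a 16kHz audio file (the path is a placeholder).
print(asr("/path/to/audio.wav"))
```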
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 1.0564 | 2.36 | 2000 | 2.3456 | 0.9628 | 0.5549 |
| 0.5071 | 4.73 | 4000 | 2.0652 | 0.9071 | 0.5115 |
| 0.3952 | 7.09 | 6000 | 2.3649 | 0.9108 | 0.4628 |
| 0.3367 | 9.46 | 8000 | 1.7615 | 0.8253 | 0.4348 |
| 0.2765 | 11.82 | 10000 | 1.6151 | 0.7937 | 0.4087 |
| 0.2493 | 14.18 | 12000 | 1.4976 | 0.7881 | 0.3905 |
| 0.2318 | 16.55 | 14000 | 1.6731 | 0.8160 | 0.3925 |
| 0.2074 | 18.91 | 16000 | 1.5822 | 0.7658 | 0.3913 |
| 0.1825 | 21.28 | 18000 | 1.5442 | 0.7361 | 0.3704 |
| 0.1824 | 23.64 | 20000 | 1.5988 | 0.7621 | 0.3711 |
| 0.1699 | 26.0 | 22000 | 1.4261 | 0.7119 | 0.3490 |
| 0.158 | 28.37 | 24000 | 1.7482 | 0.7658 | 0.3648 |
| 0.1385 | 30.73 | 26000 | 1.4103 | 0.6784 | 0.3348 |
| 0.1199 | 33.1 | 28000 | 1.5214 | 0.6636 | 0.3273 |
| 0.116 | 35.46 | 30000 | 1.4288 | 0.7212 | 0.3486 |
| 0.1071 | 37.83 | 32000 | 1.5344 | 0.7138 | 0.3411 |
| 0.1007 | 40.19 | 34000 | 1.4501 | 0.6691 | 0.3237 |
| 0.0943 | 42.55 | 36000 | 1.5367 | 0.6859 | 0.3265 |
| 0.0844 | 44.92 | 38000 | 1.5321 | 0.6599 | 0.3273 |
| 0.0762 | 47.28 | 40000 | 1.6721 | 0.6264 | 0.3142 |
| 0.0778 | 49.65 | 42000 | 1.6666 | 0.6375 | 0.3170 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0+cu113
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Aktsvigun/bart-base_abssum_wikihow_all_9467153 | 3047235cb8554629a051dff3bc5c74a76705be5f | 2022-07-20T08:10:10.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_abssum_wikihow_all_9467153 | 3 | null | transformers | 22,742 | Entry not found |
PSW/bart-base-convsumm-xsum-cnndm-bs0.25 | eae37d20818642f95df4ff4b2bacc08a91b8bacf | 2022-07-21T01:10:02.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/bart-base-convsumm-xsum-cnndm-bs0.25 | 3 | null | transformers | 22,743 | Entry not found |
Aktsvigun/bart-base_abssum_wikihow_all_42 | 266a2f1fa8fc3f251330b664444943f1d4c6dbf8 | 2022-07-21T01:54:42.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_abssum_wikihow_all_42 | 3 | null | transformers | 22,744 | Entry not found |
PSW/bart-base-pretrained-on-xsum-cnndm-bs0.25 | c7707ac4c4fababe2f258d05f8ecdf550ca4c994 | 2022-07-21T02:08:35.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/bart-base-pretrained-on-xsum-cnndm-bs0.25 | 3 | null | transformers | 22,745 | Entry not found |
trevorj/BART_reddit_media_lifestyle_sports | 5e69238089d20b2cec13a812db8cf85a1d26e158 | 2022-07-21T15:24:59.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | trevorj | null | trevorj/BART_reddit_media_lifestyle_sports | 3 | null | transformers | 22,746 | Entry not found |
Aktsvigun/bart-base_abssum_wikihow_all_3878022 | ebe1dfc0da79b1686532ccff37b628827f13e08e | 2022-07-21T11:55:57.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_abssum_wikihow_all_3878022 | 3 | null | transformers | 22,747 | Entry not found |
Aktsvigun/bart-base_abssum_wikihow_all_705525 | b3c1e5ac81b58cde40f04c0cdd358d8be50c4634 | 2022-07-21T15:00:09.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_abssum_wikihow_all_705525 | 3 | null | transformers | 22,748 | Entry not found |
Aktsvigun/bart-base_abssum_wikihow_all_5537116 | a1f897953cd18816e95ced6114b08c56e46115ef | 2022-07-22T02:12:13.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_abssum_wikihow_all_5537116 | 3 | null | transformers | 22,749 | Entry not found |
relbert/relbert-roberta-large-semeval2012-mask-prompt-d-nce | 299489979184aa59878dd126e299dc0c32ae0e03 | 2022-07-25T00:06:45.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-mask-prompt-d-nce | 3 | null | transformers | 22,750 | Entry not found |
relbert/relbert-roberta-large-semeval2012-mask-prompt-a-nce | adb6ef4718fc2e5dcf93e9291e8155699478ec82 | 2022-07-24T21:21:12.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-mask-prompt-a-nce | 3 | null | transformers | 22,751 | Entry not found |
relbert/relbert-roberta-large-semeval2012-mask-prompt-b-nce | ba35b178dadceb5266c4ccdd508bb1c1b0f904af | 2022-07-24T22:16:20.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-mask-prompt-b-nce | 3 | null | transformers | 22,752 | Entry not found |
relbert/relbert-roberta-large-semeval2012-mask-prompt-c-nce | b793652a1ffb8017e523884b647bef2b4c584495 | 2022-07-24T23:10:50.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-mask-prompt-c-nce | 3 | null | transformers | 22,753 | Entry not found |
relbert/relbert-roberta-large-semeval2012-mask-prompt-e-nce | 90295a65ceb5f1854ad5873adc082474a5ff4107 | 2022-07-25T01:02:55.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-mask-prompt-e-nce | 3 | null | transformers | 22,754 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-prompt-d-nce | 84afd0b6814b58d11a712f19b9a09fe6dbda2c36 | 2022-07-25T00:25:45.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-prompt-d-nce | 3 | null | transformers | 22,755 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-prompt-a-nce | c249af31247e4785bbf281376828381942e323ba | 2022-07-24T21:39:29.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-prompt-a-nce | 3 | null | transformers | 22,756 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-prompt-b-nce | befc0ec3c05aca23e625ec48747a70975316c6b2 | 2022-07-24T22:34:32.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-prompt-b-nce | 3 | null | transformers | 22,757 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-prompt-c-nce | 776c7d7f077ce26ea2da3c931a497ffac073ec35 | 2022-07-24T23:30:24.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-prompt-c-nce | 3 | null | transformers | 22,758 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-d-nce | 30c2236082d75726713ad5710954f30e756cfb33 | 2022-07-25T00:44:25.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-d-nce | 3 | null | transformers | 22,759 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-nce | 79138e4ff24bef7d8ca9454b07d9e140e3697aa9 | 2022-07-24T21:57:49.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-nce | 3 | null | transformers | 22,760 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-b-nce | 58904a386c7211fa7a53d72493023f17eef4e2c2 | 2022-07-24T22:52:53.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-b-nce | 3 | null | transformers | 22,761 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-c-nce | 53128ae75bd40c425a2c265124432c0fdabfde77 | 2022-07-24T23:48:09.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-c-nce | 3 | null | transformers | 22,762 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-e-nce | 7318f95226d68c8c60e5ff0d9f81821439354aa5 | 2022-07-25T01:39:52.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-e-nce | 3 | null | transformers | 22,763 | Entry not found |
ManqingLiu/distilbert-base-uncased-distilled-clinc | 9e75fc5a79bd25fdf8dfd71d2536b0575262a818 | 2022-07-22T18:06:51.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ManqingLiu | null | ManqingLiu/distilbert-base-uncased-distilled-clinc | 3 | null | transformers | 22,764 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9390322580645162
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0990
- Accuracy: 0.9390
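A minimal inference sketch using the `text-classification` pipeline; the example utterance is a placeholder:

```python
from transformers import pipeline

# Load the distilled intent classifier named in this card.
classifier = pipeline(
    "text-classification",
    model="ManqingLiu/distilbert-base-uncased-distilled-clinc",
)

# Classify an utterance into one of the clinc_oos intents.
print(classifier("how do i transfer money to my savings account"))
```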
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0901 | 1.0 | 318 | 0.6293 | 0.7026 |
| 0.4796 | 2.0 | 636 | 0.2666 | 0.8661 |
| 0.2386 | 3.0 | 954 | 0.1553 | 0.9148 |
| 0.1591 | 4.0 | 1272 | 0.1238 | 0.9271 |
| 0.1309 | 5.0 | 1590 | 0.1121 | 0.9339 |
| 0.118 | 6.0 | 1908 | 0.1065 | 0.9371 |
| 0.11 | 7.0 | 2226 | 0.1033 | 0.9394 |
| 0.1057 | 8.0 | 2544 | 0.1002 | 0.9377 |
| 0.1032 | 9.0 | 2862 | 0.0995 | 0.9384 |
| 0.1014 | 10.0 | 3180 | 0.0990 | 0.9390 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
relbert/relbert-roberta-large-semeval2012-average-prompt-e-triplet | d28c229b6f4ff5fee8c72a1a5e7839fc32732739 | 2022-07-24T20:40:34.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-prompt-e-triplet | 3 | null | transformers | 22,765 | Entry not found |
ilana/tiny-bert-sst2-distilled | c0613a089954ac0a9903a1e8acd83943fdeb1cff | 2022-07-23T19:23:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ilana | null | ilana/tiny-bert-sst2-distilled | 3 | null | transformers | 22,766 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: tiny-bert-sst2-distilled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-bert-sst2-distilled
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.0017
- eval_accuracy: 0.7477
- eval_runtime: 0.3985
- eval_samples_per_second: 2188.296
- eval_steps_per_second: 17.567
- epoch: 1.0
- step: 527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.708803333901887e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
techsword/wav2vec-fame-frisian | 06cda281ebbd3275dd5acac9c7777e1e8424b8fe | 2022-07-23T21:07:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | techsword | null | techsword/wav2vec-fame-frisian | 3 | null | transformers | 22,767 | Entry not found |
PanNorek/distilroberta-base-disaster-tweets | e59cbc46ab68eb10efa1ee9726e784cd2fc77a57 | 2022-07-23T21:58:08.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | PanNorek | null | PanNorek/distilroberta-base-disaster-tweets | 3 | null | transformers | 22,768 | Entry not found |
zluvolyote/DEREXP_home | 1aa6cbaa57e1f9a689c85f0f657d7bf931eacb37 | 2022-07-23T22:34:33.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | zluvolyote | null | zluvolyote/DEREXP_home | 3 | null | transformers | 22,769 | Entry not found |
schnell/test | 2de951481085575c4999d192fc10bd763abc38ef | 2022-07-24T06:06:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | schnell | null | schnell/test | 3 | null | transformers | 22,770 | ---
tags:
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
domenicrosati/deberta-v3-large-finetuned-synthetic-generated-only | 43e200a333e62fb8a15043d242eb45cc7af8b093 | 2022-07-24T22:50:35.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-large-finetuned-synthetic-generated-only | 3 | null | transformers | 22,771 | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: deberta-v3-large-finetuned-synthetic-generated-only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-synthetic-generated-only
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0094
- F1: 0.9839
- Precision: 0.9849
- Recall: 0.9828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:---------:|:------:|
| 0.009 | 1.0 | 10387 | 0.0104 | 0.9722 | 0.9919 | 0.9533 |
| 0.0013 | 2.0 | 20774 | 0.0067 | 0.9825 | 0.9844 | 0.9805 |
| 0.0006 | 3.0 | 31161 | 0.0077 | 0.9843 | 0.9902 | 0.9786 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
zluvolyote/DEREXP_Regression_6k | 70267d9292f28a8ddea5b6d1d2785a8b490a8f07 | 2022-07-26T00:51:31.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | zluvolyote | null | zluvolyote/DEREXP_Regression_6k | 3 | null | transformers | 22,772 | Entry not found |
wuhuaguo/distilbert-base-uncased-finetuned-cola | 9a736ac9e38a23fed8378d1fda0a7bae7a00a7b7 | 2022-07-25T05:29:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | wuhuaguo | null | wuhuaguo/distilbert-base-uncased-finetuned-cola | 3 | null | transformers | 22,773 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5489250601752835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8115
- Matthews Correlation: 0.5489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5223 | 1.0 | 535 | 0.5400 | 0.4165 |
| 0.349 | 2.0 | 1070 | 0.5125 | 0.4738 |
| 0.2392 | 3.0 | 1605 | 0.5283 | 0.5411 |
| 0.1791 | 4.0 | 2140 | 0.7506 | 0.5301 |
| 0.127 | 5.0 | 2675 | 0.8115 | 0.5489 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
hecsi/distilbert-base-uncased-finetuned-emotion | c2c1fbac3726227f8bcfb67af7e2448f8db5a6e5 | 2022-07-25T06:09:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | hecsi | null | hecsi/distilbert-base-uncased-finetuned-emotion | 3 | null | transformers | 22,774 | Entry not found |
emilylearning/cond_ft_none_on_reddit__prcnt_100__test_run_False__bert-base-uncased | f49f28aaf0d73800d687a4c3984481436b50c857 | 2022-07-25T18:49:04.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/cond_ft_none_on_reddit__prcnt_100__test_run_False__bert-base-uncased | 3 | null | transformers | 22,775 | Entry not found |
jaeyeon/korean-aihub-learning-math-1-test | fc89cbcd6c93fc449e3c75c5d1c13874f1662a5d | 2022-07-26T03:41:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jaeyeon | null | jaeyeon/korean-aihub-learning-math-1-test | 3 | null | transformers | 22,776 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: korean-aihub-learning-math-1-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# korean-aihub-learning-math-1-test
This model is a fine-tuned version of [kresnik/wav2vec2-large-xlsr-korean](https://huggingface.co/kresnik/wav2vec2-large-xlsr-korean) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2537
- Wer: 0.4765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 35 | 29.8031 | 1.0 |
| No log | 2.0 | 70 | 5.7158 | 1.0 |
| 19.8789 | 3.0 | 105 | 4.5005 | 1.0 |
| 19.8789 | 4.0 | 140 | 4.3677 | 0.9984 |
| 19.8789 | 5.0 | 175 | 3.8013 | 0.9882 |
| 3.9785 | 6.0 | 210 | 2.4132 | 0.8730 |
| 3.9785 | 7.0 | 245 | 1.5867 | 0.7045 |
| 3.9785 | 8.0 | 280 | 1.3179 | 0.6082 |
| 1.2266 | 9.0 | 315 | 1.2431 | 0.6066 |
| 1.2266 | 10.0 | 350 | 1.1791 | 0.5384 |
| 1.2266 | 11.0 | 385 | 1.0994 | 0.5298 |
| 0.3916 | 12.0 | 420 | 1.1552 | 0.5196 |
| 0.3916 | 13.0 | 455 | 1.1495 | 0.5486 |
| 0.3916 | 14.0 | 490 | 1.1340 | 0.5290 |
| 0.2488 | 15.0 | 525 | 1.2208 | 0.5525 |
| 0.2488 | 16.0 | 560 | 1.1682 | 0.5024 |
| 0.2488 | 17.0 | 595 | 1.1479 | 0.5008 |
| 0.1907 | 18.0 | 630 | 1.1735 | 0.4882 |
| 0.1907 | 19.0 | 665 | 1.2302 | 0.4914 |
| 0.1461 | 20.0 | 700 | 1.2497 | 0.4890 |
| 0.1461 | 21.0 | 735 | 1.2434 | 0.4914 |
| 0.1461 | 22.0 | 770 | 1.2031 | 0.5031 |
| 0.1147 | 23.0 | 805 | 1.2451 | 0.4976 |
| 0.1147 | 24.0 | 840 | 1.2746 | 0.4937 |
| 0.1147 | 25.0 | 875 | 1.2405 | 0.4828 |
| 0.0892 | 26.0 | 910 | 1.2228 | 0.4929 |
| 0.0892 | 27.0 | 945 | 1.2642 | 0.4898 |
| 0.0892 | 28.0 | 980 | 1.2586 | 0.4843 |
| 0.0709 | 29.0 | 1015 | 1.2518 | 0.4788 |
| 0.0709 | 30.0 | 1050 | 1.2537 | 0.4765 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Ankhitan/1000-model5 | afce4c103fbe130cc7f4b929c4afa966311f9dec | 2022-07-25T15:38:09.000Z | [
"pytorch",
"segformer",
"transformers"
] | null | false | Ankhitan | null | Ankhitan/1000-model5 | 3 | null | transformers | 22,777 | Entry not found |
jonatasgrosman/exp_w2v2r_en_xls-r_accent_us-0_england-10_s35 | a5a962a123a4f09f462734b7496782a548df3749 | 2022-07-25T15:31:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_accent_us-0_england-10_s35 | 3 | null | transformers | 22,778 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_accent_us-0_england-10_s35
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
emilylearning/cond_ft_subreddit_on_reddit__prcnt_100__test_run_False__bert-base-uncased | 983a47046b130eccf4cf93cc3cebd22b328ed37f | 2022-07-26T01:58:14.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/cond_ft_subreddit_on_reddit__prcnt_100__test_run_False__bert-base-uncased | 3 | null | transformers | 22,779 | Entry not found |
enoriega/rule_learning_1mm_many_negatives_spanpred_avg_corrected | e8f7dd6394d526de26ee147652cae3ff8668e52c | 2022-07-26T04:16:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"transformers"
] | null | false | enoriega | null | enoriega/rule_learning_1mm_many_negatives_spanpred_avg_corrected | 3 | null | transformers | 22,780 | Entry not found |
BigSalmon/InformalToFormalLincoln57Paraphrase | 28c0fb97ab9d7cd7c09a7aa886024fb868017fef | 2022-07-26T01:56:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln57Paraphrase | 3 | null | transformers | 22,781 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln57Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln57Paraphrase")
```
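A short generation sketch to go with the loading snippet above (it reuses `tokenizer` and `model` from that snippet); the sampling settings are illustrative, not the author's recommendation:
```
# Prompt follows the "informal english -> Lincoln style" format shown below.
prompt = ("informal english: i am very ready to do that just that.\n"
          "Translated into the Style of Abraham Lincoln:")
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; the decoding settings here are illustrative only.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```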
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicameral legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classical music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {D} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
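
A minimal sketch of how these few-shot templates might be fed to the model with the `transformers` text-generation pipeline. The checkpoint id is a placeholder to be replaced with this card's model, the final "wordy" input is a hypothetical example not taken from the templates above, and the sampling settings are illustrative only.

```python
from transformers import pipeline

# Placeholder id -- substitute the checkpoint this card describes.
generator = pipeline("text-generation", model="<checkpoint-id>")

# Reuse one of the card's templates and leave the last slot open for the model.
prompt = (
    "wordy: classical music is becoming less popular more and more.\n"
    "Translate into Concise Text: interest in classical music is fading.\n"
    "***\n"
    "wordy: the mayor gave a speech that was very long and did not say very much.\n"
    "Translate into Concise Text:"
)

print(generator(prompt, max_new_tokens=30, do_sample=True, top_p=0.9)[0]["generated_text"])
``` |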
jinwooChoi/SKKU_AP_SA_HJW_KBT1 | df6fa2e0dd22d53d7abfddbe1871cce75d4a2c67 | 2022-07-26T06:45:57.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_AP_SA_HJW_KBT1 | 3 | null | transformers | 22,782 | Entry not found |
jinwooChoi/SKKU_AP_SA_HJW_SMALL1 | 6ed123439e3d6cdc8b455f1f16468ed1eeffbf5d | 2022-07-26T07:09:26.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_AP_SA_HJW_SMALL1 | 3 | null | transformers | 22,783 | Entry not found |
SummerChiam/rust_image_classification_5 | 7d60c9b9a0ae75931c2b3b7c610944af151c3d07 | 2022-07-26T15:16:23.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | SummerChiam | null | SummerChiam/rust_image_classification_5 | 3 | null | transformers | 22,784 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_5
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9392405152320862
---
# rust_image_classification_5
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### nonrust

#### rust
 |
vijayrag/distilbert-base-uncased-finetuned-emotion | 18fb9275408dab8fc03f230f6aa492af02745890 | 2022-07-26T19:40:57.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | vijayrag | null | vijayrag/distilbert-base-uncased-finetuned-emotion | 3 | null | transformers | 22,785 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9273204837245832
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2178
- Accuracy: 0.9275
- F1: 0.9273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8381 | 1.0 | 250 | 0.3130 | 0.9075 | 0.9054 |
| 0.2443 | 2.0 | 500 | 0.2178 | 0.9275 | 0.9273 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
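
## How to use

A minimal usage sketch with the `transformers` pipeline, assuming the fine-tuned checkpoint is available on the Hub under this card's id. The emotion dataset uses six labels (sadness, joy, love, anger, fear, surprise); they appear by name only if the `id2label` mapping was saved with the model.

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="vijayrag/distilbert-base-uncased-finetuned-emotion")

print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': 0.99...}] -- exact labels depend on the saved config
```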
|
helliun/article_sent_pol | 5733cdad3cf05fc2aa6cad37392d08c03a302166 | 2022-07-26T21:53:10.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | helliun | null | helliun/article_sent_pol | 3 | null | transformers | 22,786 | Entry not found |
huggingtweets/lookinmyeyesboy-mcstoryfeed-mono93646057 | 34e3e62c1decedaff76fb2d45500e2886ac729b0 | 2022-07-27T08:10:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lookinmyeyesboy-mcstoryfeed-mono93646057 | 3 | null | transformers | 22,787 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1234927574809182209/TTjRcchM_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1302461614478811137/J8gENyLO_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1248778001220882432/yDL7saMY_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MCStoryBot & Look Into My Eyes Boy & 𝐓𝐡𝐞 𝐌𝐞𝐠𝐚𝐥𝐢𝐭𝐡</div>
<div style="text-align: center; font-size: 14px;">@lookinmyeyesboy-mcstoryfeed-mono93646057</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MCStoryBot & Look Into My Eyes Boy & 𝐓𝐡𝐞 𝐌𝐞𝐠𝐚𝐥𝐢𝐭𝐡.
| Data | MCStoryBot | Look Into My Eyes Boy | 𝐓𝐡𝐞 𝐌𝐞𝐠𝐚𝐥𝐢𝐭𝐡 |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3244 | 3249 |
| Retweets | 0 | 170 | 39 |
| Short tweets | 0 | 209 | 15 |
| Tweets kept | 3250 | 2865 | 3195 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/futewq5a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lookinmyeyesboy-mcstoryfeed-mono93646057's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wsp763m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wsp763m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lookinmyeyesboy-mcstoryfeed-mono93646057')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mughalk4/mBERT-Hindi-Mono | a0960ddb9808f910b4a893fbb53b66ae59876f77 | 2022-07-28T06:06:48.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mughalk4 | null | mughalk4/mBERT-Hindi-Mono | 3 | null | transformers | 22,788 | Entry not found |
olemeyer/zero_shot_issue_classification_bart-large-32-d | 894f627852701c5dbbf6b3f72697dd0426f09e3f | 2022-07-29T08:57:03.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | olemeyer | null | olemeyer/zero_shot_issue_classification_bart-large-32-d | 3 | null | transformers | 22,789 | Entry not found |
Junmai/KR-Data2VecText-v1 | 27f6bb039642c21c798d162dde96fe01ade74c1e | 2022-07-28T09:34:46.000Z | [
"pytorch",
"data2vec-text",
"feature-extraction",
"transformers"
] | feature-extraction | false | Junmai | null | Junmai/KR-Data2VecText-v1 | 3 | null | transformers | 22,790 | Entry not found |
AlexKolosov/my_first_model | 23f43428f4a8ef709ebd932f800e46a23dd91c67 | 2022-07-28T14:14:33.000Z | [
"pytorch",
"resnet",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | AlexKolosov | null | AlexKolosov/my_first_model | 3 | null | transformers | 22,791 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: my_first_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_first_model
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6853
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6918 | 1.0 | 23 | 0.6895 | 0.8 |
| 0.7019 | 2.0 | 46 | 0.6859 | 0.6 |
| 0.69 | 3.0 | 69 | 0.6853 | 0.6 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
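
## How to use

A single-image inference sketch, assuming the checkpoint and its preprocessing config were pushed to the Hub under this card's id; the class names come from whatever `id2label` mapping the imagefolder training run produced, and `example.jpg` is a hypothetical local file.

```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

extractor = AutoFeatureExtractor.from_pretrained("AlexKolosov/my_first_model")
model = AutoModelForImageClassification.from_pretrained("AlexKolosov/my_first_model")

image = Image.open("example.jpg").convert("RGB")  # hypothetical input file
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```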
|
maesneako/ES_corlec_DeepESP-gpt2-spanish | 819ba746145811603cbdb3e7103a8a41927725a1 | 2022-07-28T22:04:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | maesneako | null | maesneako/ES_corlec_DeepESP-gpt2-spanish | 3 | null | transformers | 22,792 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ES_corlec_DeepESP-gpt2-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ES_corlec_DeepESP-gpt2-spanish
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0360
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.2471 | 0.4 | 2000 | 4.2111 |
| 4.1503 | 0.79 | 4000 | 4.1438 |
| 4.0749 | 1.19 | 6000 | 4.1077 |
| 4.024 | 1.59 | 8000 | 4.0857 |
| 3.9855 | 1.98 | 10000 | 4.0707 |
| 3.9465 | 2.38 | 12000 | 4.0605 |
| 3.9277 | 2.78 | 14000 | 4.0533 |
| 3.9159 | 3.17 | 16000 | 4.0482 |
| 3.8918 | 3.57 | 18000 | 4.0448 |
| 3.8789 | 3.97 | 20000 | 4.0421 |
| 3.8589 | 4.36 | 22000 | 4.0402 |
| 3.8554 | 4.76 | 24000 | 4.0387 |
| 3.8509 | 5.15 | 26000 | 4.0377 |
| 3.8389 | 5.55 | 28000 | 4.0370 |
| 3.8288 | 5.95 | 30000 | 4.0365 |
| 3.8293 | 6.34 | 32000 | 4.0362 |
| 3.8202 | 6.74 | 34000 | 4.0360 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
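
## How to use

A minimal generation sketch, assuming the fine-tuned checkpoint lives on the Hub under this card's id; the Spanish prompt and the sampling settings are illustrative only.

```python
from transformers import pipeline

generator = pipeline("text-generation",
                     model="maesneako/ES_corlec_DeepESP-gpt2-spanish")

# Spanish prompt, matching the base model and fine-tuning corpus.
print(generator("Bueno, pues yo creo que",
                max_new_tokens=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```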
|
LanaKru/wikineural-multilingual-ner-finetuned-ner | 8bc24ba6f67d457283cd5a784d408b621b29e139 | 2022-07-29T09:36:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:skript",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | LanaKru | null | LanaKru/wikineural-multilingual-ner-finetuned-ner | 3 | null | transformers | 22,793 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- skript
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: wikineural-multilingual-ner-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: skript
type: skript
config: myscript
split: train
args: myscript
metrics:
- name: Precision
type: precision
value: 0.9007335298553506
- name: Recall
type: recall
value: 0.9301946902654867
- name: F1
type: f1
value: 0.9152270827528559
- name: Accuracy
type: accuracy
value: 0.9653644982020269
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wikineural-multilingual-ner-finetuned-ner
This model is a fine-tuned version of [Babelscape/wikineural-multilingual-ner](https://huggingface.co/Babelscape/wikineural-multilingual-ner) on the skript dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1243
- Precision: 0.9007
- Recall: 0.9302
- F1: 0.9152
- Accuracy: 0.9654
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 298 | 0.1179 | 0.8975 | 0.8981 | 0.8978 | 0.9592 |
| 0.104 | 2.0 | 596 | 0.1161 | 0.9051 | 0.9201 | 0.9126 | 0.9648 |
| 0.104 | 3.0 | 894 | 0.1243 | 0.9007 | 0.9302 | 0.9152 | 0.9654 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
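
## How to use

A usage sketch with the token-classification pipeline; `aggregation_strategy="simple"` merges word pieces back into whole entity spans. The entity types depend on the skript dataset's tag set, which this card does not list, so the example sentence is illustrative only.

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="LanaKru/wikineural-multilingual-ner-finetuned-ner",
               aggregation_strategy="simple")  # merge sub-tokens into spans

for entity in ner("Apple is looking at buying a U.K. startup for $1 billion."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```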
|
SafiUllahShahid/EnGECmodel | d40d47e74749874460f2c7227230a2c06355bc75 | 2022-07-29T08:12:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | SafiUllahShahid | null | SafiUllahShahid/EnGECmodel | 3 | null | transformers | 22,794 | ---
license: apache-2.0
---
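
## How to use

The card ships no usage notes, so this is only a hedged sketch: a text2text-generation pipeline call that passes the raw sentence as-is. If the model was trained with a task prefix (common for T5-based grammatical-error-correction systems), that prefix would need to be prepended; nothing below is confirmed by the card.

```python
from transformers import pipeline

corrector = pipeline("text2text-generation", model="SafiUllahShahid/EnGECmodel")

# Assumption: the model accepts the ungrammatical sentence directly, no prefix.
print(corrector("She go to school every days.")[0]["generated_text"])
```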
|
SummerChiam/pond_image_classification_3 | 0b5e0a2229c7c4b9bc72cc73ed27be851da90a86 | 2022-07-29T07:03:07.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | SummerChiam | null | SummerChiam/pond_image_classification_3 | 3 | null | transformers | 22,795 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_3
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9974489808082581
---
# pond_image_classification_3
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae

#### Boiling

#### BoilingNight

#### Normal

#### NormalCement

#### NormalNight

#### NormalRain
 |
asparius/combined-2 | e53282e3ea5faf394782ab6121d09dd52f27f521 | 2022-07-29T17:20:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | asparius | null | asparius/combined-2 | 3 | null | transformers | 22,796 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: combined-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# combined-2
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7317
- Accuracy: 0.8828
- F1: 0.8866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
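
## How to use

A hedged usage sketch. The card does not document the label set, so predictions may surface as generic `LABEL_0`/`LABEL_1` names unless an `id2label` mapping was saved with the checkpoint; the Turkish example input matches the dbmdz/bert-base-turkish-cased base model.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="asparius/combined-2")

print(classifier("Bu film gerçekten harikaydı."))  # "This film was really great."
```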
|
domenicrosati/deberta-v3-large-finetuned-dagpap22-only | 8f48df280ce0d40e5397262267c6446f665b7355 | 2022-07-29T20:05:17.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-large-finetuned-dagpap22-only | 3 | null | transformers | 22,797 | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: deberta-v3-large-finetuned-dagpap22-only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-dagpap22-only
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0037
- F1: 0.9995
- Precision: 0.9992
- Recall: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|
| 0.1804 | 1.0 | 669 | 0.0222 | 0.9971 | 0.9975 | 0.9967 |
| 0.0402 | 2.0 | 1338 | 0.0069 | 0.9990 | 0.9992 | 0.9989 |
| 0.0046 | 3.0 | 2007 | 0.0037 | 0.9995 | 0.9992 | 0.9997 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
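
## How to use

A hedged inference sketch. DAGPap22 was a shared task on detecting automatically generated scientific papers, so the classifier presumably separates human-written from machine-generated scientific text, but the card itself does not document the labels. `truncation=True` guards against inputs longer than the model's maximum sequence length.

```python
from transformers import pipeline

detector = pipeline("text-classification",
                    model="domenicrosati/deberta-v3-large-finetuned-dagpap22-only")

abstract = "We propose a novel method for ..."  # hypothetical paper excerpt
print(detector(abstract, truncation=True))
```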
|
13on/gpt2-wishes | 769284ebaceeb7518f5f7f9fbc35ad94f8c59fe4 | 2022-02-17T16:06:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | 13on | null | 13on/gpt2-wishes | 2 | null | transformers | 22,798 | Entry not found |
1Basco/DialoGPT-small-jake | 839591d80ac1a678eb46623e888599b3ddea18f5 | 2021-09-22T03:32:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | 1Basco | null | 1Basco/DialoGPT-small-jake | 2 | null | transformers | 22,799 | ---
tags:
- conversational
---
# Jake Peralta DialoGPT Model
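
A chat-loop sketch following the standard DialoGPT usage pattern rather than anything documented on this card; the turn count and `max_length` are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("1Basco/DialoGPT-small-jake")
model = AutoModelForCausalLM.from_pretrained("1Basco/DialoGPT-small-jake")

chat_history_ids = None
for _ in range(3):  # three conversational turns
    user_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token,
                                return_tensors="pt")
    bot_input_ids = (torch.cat([chat_history_ids, user_ids], dim=-1)
                     if chat_history_ids is not None else user_ids)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000,
                                      pad_token_id=tokenizer.eos_token_id)
    print("Jake:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0],
                                    skip_special_tokens=True))
``` |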