modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jhonparra18/distilbert-base-multilingual-cased-cv-studio_name-pooler | d9962ec33f58cf75e7da5d305eb758eccabae817 | 2022-07-26T21:05:44.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jhonparra18 | null | jhonparra18/distilbert-base-multilingual-cased-cv-studio_name-pooler | 1 | null | transformers | 33,500 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-cv-studio_name-pooler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-cv-studio_name-pooler
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3751
- Accuracy: 0.6846
- F1 Micro: 0.6846
- F1 Macro: 0.4355
- Precision Micro: 0.6846
- Recall Micro: 0.6846
## Model description
More information needed
## Intended uses & limitations
More information needed
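Pending fuller documentation, the sketch below shows one way to query the model with the standard Transformers text-classification pipeline; the example sentence is invented, and the label names come from the model's own config rather than from this card.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jhonparra18/distilbert-base-multilingual-cased-cv-studio_name-pooler",
)

# Hypothetical input; the "cv" in the model name suggests résumé-style multilingual text.
print(classifier("Ingeniero de software con cinco años de experiencia en desarrollo web."))
```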
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | F1 Macro | Precision Micro | Recall Micro |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:--------:|:---------------:|:------------:|
| 1.881 | 1.19 | 1000 | 1.6365 | 0.4948 | 0.4948 | 0.1438 | 0.4948 | 0.4948 |
| 1.2071 | 2.39 | 2000 | 1.2566 | 0.6444 | 0.6444 | 0.3257 | 0.6444 | 0.6444 |
| 0.9068 | 3.58 | 3000 | 1.1112 | 0.6945 | 0.6945 | 0.3995 | 0.6945 | 0.6945 |
| 0.7168 | 4.77 | 4000 | 1.0952 | 0.7053 | 0.7053 | 0.4334 | 0.7053 | 0.7053 |
| 0.5928 | 5.97 | 5000 | 1.1416 | 0.7116 | 0.7116 | 0.4505 | 0.7116 | 0.7116 |
| 0.4373 | 7.16 | 6000 | 1.2468 | 0.7064 | 0.7064 | 0.4499 | 0.7064 | 0.7064 |
| 0.2941 | 8.35 | 7000 | 1.4017 | 0.6997 | 0.6997 | 0.4473 | 0.6997 | 0.6997 |
| 0.2139 | 9.55 | 8000 | 1.5695 | 0.6973 | 0.6973 | 0.4433 | 0.6973 | 0.6973 |
| 0.1437 | 10.74 | 9000 | 1.7535 | 0.6953 | 0.6953 | 0.4387 | 0.6953 | 0.6953 |
| 0.1273 | 11.93 | 10000 | 1.9145 | 0.6937 | 0.6937 | 0.4405 | 0.6937 | 0.6937 |
| 0.1042 | 13.13 | 11000 | 2.0205 | 0.6893 | 0.6893 | 0.4370 | 0.6893 | 0.6893 |
| 0.07 | 14.32 | 12000 | 2.1489 | 0.6881 | 0.6881 | 0.4372 | 0.6881 | 0.6881 |
| 0.0526 | 15.51 | 13000 | 2.2252 | 0.6874 | 0.6874 | 0.4349 | 0.6874 | 0.6874 |
| 0.0427 | 16.71 | 14000 | 2.3141 | 0.6877 | 0.6877 | 0.4360 | 0.6877 | 0.6877 |
| 0.0482 | 17.9 | 15000 | 2.3349 | 0.6810 | 0.6810 | 0.4320 | 0.6810 | 0.6810 |
| 0.042 | 19.09 | 16000 | 2.3751 | 0.6846 | 0.6846 | 0.4355 | 0.6846 | 0.6846 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.1+cu111
- Datasets 1.6.2
- Tokenizers 0.12.1
|
huggingtweets/rubberpomade | ac0e75bace40fe8fb131f15d3e5fa18b7d6a5b3a | 2022-07-26T20:54:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/rubberpomade | 1 | null | transformers | 33,501 | ---
language: en
thumbnail: http://www.huggingtweets.com/rubberpomade/1658868837178/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1533674187302346752/ZMkiX-8g_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rocco (Comms 2/2)</div>
<div style="text-align: center; font-size: 14px;">@rubberpomade</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rocco (Comms 2/2).
| Data | Rocco (Comms 2/2) |
| --- | --- |
| Tweets downloaded | 986 |
| Retweets | 59 |
| Short tweets | 75 |
| Tweets kept | 852 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/f3r1i1wf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rubberpomade's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/53sh5gts) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/53sh5gts/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rubberpomade')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
JoAmps/xlm-roberta-base-finetuned-panx-de | 583610ec72d087883e870a68a7bd08e0b1057d0d | 2022-07-26T21:36:51.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | JoAmps | null | JoAmps/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 33,502 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8616051071591427
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1378
- F1: 0.8616
## Model description
More information needed
## Intended uses & limitations
More information needed
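No usage notes are given, so here is a minimal inference sketch assuming the standard Transformers token-classification pipeline; the German sentence is made up, and the exact entity labels are defined in the model's config.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="JoAmps/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Angela Merkel besuchte das Brandenburger Tor in Berlin."))
```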
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2569 | 1.0 | 525 | 0.1617 | 0.8228 |
| 0.1295 | 2.0 | 1050 | 0.1326 | 0.8514 |
| 0.0816 | 3.0 | 1575 | 0.1378 | 0.8616 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Jmolano/bert-finetuned-ner | 7a24ceb6843f0c1cc5c3c1abdac922a56ad13ca7 | 2022-07-28T02:51:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Jmolano | null | Jmolano/bert-finetuned-ner | 1 | null | transformers | 33,503 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9327383903487027
- name: Recall
type: recall
value: 0.9498485358465163
- name: F1
type: f1
value: 0.9412157091636788
- name: Accuracy
type: accuracy
value: 0.9860923058809677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Precision: 0.9327
- Recall: 0.9498
- F1: 0.9412
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
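As a placeholder until the card is completed, the following sketch loads the model explicitly and runs it through the token-classification pipeline; the input sentence is illustrative only, and the entity types follow the CoNLL-2003 label set named above.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_name = "Jmolano/bert-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

ner = pipeline(
    "token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple"
)
print(ner("Hugging Face Inc. is based in New York City."))
```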
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0868 | 1.0 | 1756 | 0.0697 | 0.9204 | 0.9297 | 0.9250 | 0.9807 |
| 0.0342 | 2.0 | 3512 | 0.0647 | 0.9273 | 0.9465 | 0.9368 | 0.9853 |
| 0.0175 | 3.0 | 5268 | 0.0617 | 0.9327 | 0.9498 | 0.9412 | 0.9861 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ultra-coder54732/MiniLM-prop-16-train-set | dbff5509ce18f343c7dd445f96297b8470ded24a | 2022-07-27T00:45:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | ultra-coder54732 | null | ultra-coder54732/MiniLM-prop-16-train-set | 1 | null | transformers | 33,504 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: MiniLM-prop-16-train-set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-prop-16-train-set
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
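Because the training data and label meanings are undocumented, the sketch below only shows how to obtain class probabilities from the classifier head; interpreting them requires knowledge of the (unknown) training labels.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "ultra-coder54732/MiniLM-prop-16-train-set"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Example sentence to score.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # one probability per (undocumented) class
```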
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rosicast/hubert-base-ls960-korean-zeroth-kspon-jamo | 522b3350e646dda67581bd5eea24b2d3b80a9827 | 2022-07-29T00:00:30.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | rosicast | null | rosicast/hubert-base-ls960-korean-zeroth-kspon-jamo | 1 | null | transformers | 33,505 | Entry not found |
rosicast/hubert-large-ll60k-korean-zeroth-kspon-jamo | b451e28eb4f7ef70c373c439cb6fad98f0cc90c2 | 2022-07-29T04:07:24.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | rosicast | null | rosicast/hubert-large-ll60k-korean-zeroth-kspon-jamo | 1 | null | transformers | 33,506 | Entry not found |
voidful/phoneme-longt5-local | ce23e358c71151dae37c3b0b18d3d72b83f09730 | 2022-07-27T03:09:41.000Z | [
"pytorch",
"longt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | voidful | null | voidful/phoneme-longt5-local | 1 | null | transformers | 33,507 | Entry not found |
Atif-Memon/tRAINING-DATASET-All-files-final | 42484edf75f97293493e939adddd00871e4d4d03 | 2022-07-29T19:43:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Atif-Memon | null | Atif-Memon/tRAINING-DATASET-All-files-final | 1 | null | transformers | 33,508 | Entry not found |
SummerChiam/pond_image_classification_8 | c0954f45620807f9cee4e4f835d3215614b7615a | 2022-07-27T05:26:02.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | SummerChiam | null | SummerChiam/pond_image_classification_8 | 1 | null | transformers | 33,509 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9948979616165161
---
# pond_image_classification_2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
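A minimal inference sketch, assuming the standard Transformers image-classification pipeline; `pond_photo.png` is a placeholder for any local pond image.

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="SummerChiam/pond_image_classification_8")

image = Image.open("pond_photo.png")  # placeholder path
print(classifier(image))  # scores over the classes shown below (Algae, Boiling, Normal, ...)
```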
## Example Images
#### Algae

#### Boiling

#### BoilingNight

#### Normal

#### NormalCement

#### NormalNight

#### NormalRain
 |
IDEA-CCNL/Erlangshen-ZEN2-668M-Chinese | d4a2ddc12635a4fc6bd4756dbce2e5872d9c8162 | 2022-07-27T09:32:12.000Z | [
"pytorch",
"zh",
"arxiv:2105.01279",
"transformers",
"ZEN",
"chinese",
"license:apache-2.0"
] | null | false | IDEA-CCNL | null | IDEA-CCNL/Erlangshen-ZEN2-668M-Chinese | 1 | null | transformers | 33,510 | ---
language:
- zh
license: apache-2.0
tags:
- ZEN
- chinese
inference: false
---
# Erlangshen-ZEN2-668M-Chinese, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
Erlangshen-ZEN2-668M-Chinese is an open-source Chinese pre-trained model from the ZEN team, released as part of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). IDEA-CCNL builds on the [ZEN2.0 source code](https://github.com/sinovation/ZEN2) and the [ZEN2.0 paper](https://arxiv.org/abs/2105.01279), and provides results and code samples for ZEN2.0 on Chinese classification and extraction tasks. Going forward, we will work with the ZEN team to explore optimization directions for the pre-trained model and to keep improving its performance on classification and extraction tasks.
## Usage
The ZEN2 architecture is not included in [Transformers](https://github.com/huggingface/transformers); run the following command to obtain the ZEN2 code from [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM):
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
## Load model
```python
from fengshen.models.zen2.ngram_utils import ZenNgramDict
from fengshen.models.zen2.tokenization import BertTokenizer
from fengshen.models.zen2.modeling import ZenForSequenceClassification, ZenForTokenClassification
pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN2-668M-Chinese'
tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model = ZenForSequenceClassification.from_pretrained(pretrain_path)
# model = ZenForTokenClassification.from_pretrained(pretrain_path)
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```
You can get classification and extraction examples below.
[classification example on fengshen]()
[extraction example on fengshen]()
## Evaluation
### Classification
| Model(Acc) | afqmc | tnews | iflytek | ocnli | cmnli |
| :--------: | :-----: | :----: | :-----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 0.741 | 0.584 | 0.599 | 0.788 | 0.80 |
| Erlangshen-ZEN2-668M-Chinese | 0.75 | 0.60 | 0.589 | 0.81 | 0.82 |
### Extraction
| Model(F1) | WEIBO(test) | Resume(test) | MSRA(test) | OntoNote4.0(test) | CMeEE(dev) | CLUENER(dev) |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 65.26 | 96.03 | 95.15 | 78.93 | 62.81 | 79.27 |
| Erlangshen-ZEN2-668M-Chinese | 70.02 | 96.08 | 95.13 | 80.89 | 63.37 | 79.22 |
## Citation
If you find this resource useful, please cite the following paper.
```
@article{Sinovation2021ZEN2,
title="{ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text Encoders}",
author={Yan Song, Tong Zhang, Yonggang Wang, Kai-Fu Lee},
journal={arXiv preprint arXiv:2105.01279},
year={2021},
}
``` |
PGT/graphnystromformer-artificial-balanced-max500-210000-1 | 55e746110f19f8a35180f0a114868cc1b8ce4222 | 2022-07-27T11:15:12.000Z | [
"pytorch",
"graph_nystromformer",
"text-classification",
"transformers"
] | text-classification | false | PGT | null | PGT/graphnystromformer-artificial-balanced-max500-210000-1 | 1 | null | transformers | 33,511 | Entry not found |
Billwzl/20split_dataset_version3 | 1499b785d996196ede81499f152afd0d6e1600f1 | 2022-07-28T16:20:35.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Billwzl | null | Billwzl/20split_dataset_version3 | 1 | null | transformers | 33,512 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 20split_dataset_version3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20split_dataset_version3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8310
## Model description
More information needed
## Intended uses & limitations
More information needed
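A minimal fill-mask sketch, assuming the standard Transformers pipeline; the example sentence is arbitrary.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Billwzl/20split_dataset_version3")

# DistilBERT uses the [MASK] placeholder token
print(fill_mask("The weather today is absolutely [MASK]."))
```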
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1679 | 1.0 | 313 | 2.9768 |
| 2.9869 | 2.0 | 626 | 2.9299 |
| 2.8528 | 3.0 | 939 | 2.9176 |
| 2.7435 | 4.0 | 1252 | 2.9104 |
| 2.6458 | 5.0 | 1565 | 2.8863 |
| 2.5865 | 6.0 | 1878 | 2.8669 |
| 2.5218 | 7.0 | 2191 | 2.8802 |
| 2.4647 | 8.0 | 2504 | 2.8639 |
| 2.3933 | 9.0 | 2817 | 2.8543 |
| 2.3687 | 10.0 | 3130 | 2.8573 |
| 2.3221 | 11.0 | 3443 | 2.8398 |
| 2.276 | 12.0 | 3756 | 2.8415 |
| 2.2379 | 13.0 | 4069 | 2.8471 |
| 2.2427 | 14.0 | 4382 | 2.8318 |
| 2.1741 | 15.0 | 4695 | 2.8356 |
| 2.1652 | 16.0 | 5008 | 2.8310 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
domenicrosati/deberta-v3-large-finetuned-synthetic-paraphrase-only | 6388a91011a915cb3459a1338c4c41c28857c9ad | 2022-07-28T21:38:33.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-large-finetuned-synthetic-paraphrase-only | 1 | null | transformers | 33,513 | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: deberta-v3-large-finetuned-synthetic-paraphrase-only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-synthetic-paraphrase-only
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0120
- F1: 0.9768
- Precision: 0.9961
- Recall: 0.9583
## Model description
More information needed
## Intended uses & limitations
More information needed
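Given the model name, it is presumably meant to score sentence pairs for paraphrase; the sketch below passes a pair through the text-classification pipeline (an assumption, since the card does not document the input format), and the label names come from the model config.

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="domenicrosati/deberta-v3-large-finetuned-synthetic-paraphrase-only",
)

pair = {"text": "The cat sat on the mat.", "text_pair": "A cat was sitting on the mat."}
print(detector(pair))  # label meanings are defined by the model's config
```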
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:---------:|:------:|
| 0.0086 | 1.0 | 10205 | 0.0114 | 0.9642 | 0.9846 | 0.9446 |
| 0.0059 | 2.0 | 20410 | 0.0143 | 0.9658 | 0.9961 | 0.9373 |
| 0.0 | 3.0 | 30615 | 0.0141 | 0.9716 | 0.9961 | 0.9483 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ibm/re2g-generation-nq | 60ba54b61fa68c49b9de6ea45a3bcf6f657cb547 | 2022-07-29T16:03:57.000Z | [
"pytorch",
"rag",
"transformers",
"license:apache-2.0"
] | null | false | ibm | null | ibm/re2g-generation-nq | 1 | null | transformers | 33,514 | ---
license: apache-2.0
---
|
schnell/gpt2-xl-japanese | c692bb85d76751b2faeae4e10786ca0b40036cbf | 2022-07-27T23:33:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | schnell | null | schnell/gpt2-xl-japanese | 1 | null | transformers | 33,515 | Entry not found |
AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest-trial | daa714f61d757611eb20c0d85867064527bb3518 | 2022-07-28T01:02:09.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | AykeeSalazar | null | AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest-trial | 1 | null | transformers | 33,516 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vc-bantai-vit-withoutAMBI-adunest-trial
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: Violation-Classification---Raw-9
metrics:
- name: Accuracy
type: accuracy
value: 0.7797741273100616
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vc-bantai-vit-withoutAMBI-adunest-trial
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4289
- Accuracy: 0.7798
## Model description
More information needed
## Intended uses & limitations
More information needed
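A minimal inference sketch using the generic image-classification classes from a recent Transformers release (AutoImageProcessor; older versions exposed ViTFeatureExtractor instead); `site_photo.jpg` is a placeholder, and the class names come from the model's config, which this card does not list.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_name = "AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest-trial"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

image = Image.open("site_photo.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```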
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.4 | 100 | 1.0782 | 0.4451 |
| No log | 0.8 | 200 | 0.5634 | 0.7156 |
| No log | 1.2 | 300 | 0.7181 | 0.6684 |
| No log | 1.61 | 400 | 0.4289 | 0.7798 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/penguinnnno | c7ce29629a3d452ff45968ebcd71bacbdd4297dc | 2022-07-28T01:35:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/penguinnnno | 1 | null | transformers | 33,517 | ---
language: en
thumbnail: http://www.huggingtweets.com/penguinnnno/1658971968390/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1452082178741968901/oERkhKFL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">penguino</div>
<div style="text-align: center; font-size: 14px;">@penguinnnno</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from penguino.
| Data | penguino |
| --- | --- |
| Tweets downloaded | 1865 |
| Retweets | 839 |
| Short tweets | 377 |
| Tweets kept | 649 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2hb9ovan/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @penguinnnno's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/4k058458) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/4k058458/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/penguinnnno')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
razhan/codeqmul-tokenizer | 7bcab7cd893f37066f4bd52b8fedb753f67b8dd1 | 2022-07-28T12:28:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | razhan | null | razhan/codeqmul-tokenizer | 1 | null | transformers | 33,518 | Entry not found |
Lvxue/finetuned-mt5-base | b16fdfee186d119492a59df677c58072e37b113f | 2022-07-30T10:02:58.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Lvxue | null | Lvxue/finetuned-mt5-base | 1 | null | transformers | 33,519 | Entry not found |
razhan/codeqmul-large | 84fb157a53d01f388790b45fb1941f53dfa04f1b | 2022-07-28T02:07:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | razhan | null | razhan/codeqmul-large | 1 | null | transformers | 33,520 | Entry not found |
AnonymousSub/recipes-roberta-base-tokenwise-token-and-step-losses_no_ingr | a8556cae93d3acd499d5bd49df5d980ef387d467 | 2022-07-28T02:12:20.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/recipes-roberta-base-tokenwise-token-and-step-losses_no_ingr | 1 | null | transformers | 33,521 | Entry not found |
Jmolano/bert-finetuned-ner-accelerate | a6bd4c3c93895afb5d5578707d641dcf00ec7e7a | 2022-07-28T03:15:12.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Jmolano | null | Jmolano/bert-finetuned-ner-accelerate | 1 | null | transformers | 33,522 | Entry not found |
amartyobanerjee/distilbert-base-uncased-finetuned-imdb | 96379aa5271a42eed8600d52b692ff85bcb96f32 | 2022-07-28T09:45:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | amartyobanerjee | null | amartyobanerjee/distilbert-base-uncased-finetuned-imdb | 1 | null | transformers | 33,523 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
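A minimal fill-mask sketch on movie-review-style text, matching the IMDb domain the model was fine-tuned on; the sentence is made up.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="amartyobanerjee/distilbert-base-uncased-finetuned-imdb",
    top_k=3,  # return the three most likely fillers
)
print(fill_mask("This movie was a complete [MASK]."))
```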
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jaeyeon/korean-aihub-learning-math-16batch | 5937917ca56b08d03980e92e0842a62f9ab8f7cb | 2022-07-28T08:13:59.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jaeyeon | null | jaeyeon/korean-aihub-learning-math-16batch | 1 | null | transformers | 33,524 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: korean-aihub-learning-math-16batch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# korean-aihub-learning-math-16batch
This model is a fine-tuned version of [kresnik/wav2vec2-large-xlsr-korean](https://huggingface.co/kresnik/wav2vec2-large-xlsr-korean) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1497
- Wer: 0.5260
## Model description
More information needed
## Intended uses & limitations
More information needed
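A minimal transcription sketch assuming the standard automatic-speech-recognition pipeline; `lecture_clip.wav` is a placeholder for a Korean speech recording, ideally sampled at 16 kHz to match what the base wav2vec2 checkpoint expects.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jaeyeon/korean-aihub-learning-math-16batch",
)

print(asr("lecture_clip.wav"))  # placeholder audio file; returns {"text": ...}
```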
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 20 | 32.0718 | 1.0 |
| No log | 2.0 | 40 | 24.7403 | 1.0808 |
| No log | 3.0 | 60 | 5.8389 | 1.0 |
| No log | 4.0 | 80 | 4.8543 | 1.0 |
| 19.6583 | 5.0 | 100 | 4.4453 | 1.0 |
| 19.6583 | 6.0 | 120 | 4.3923 | 1.0 |
| 19.6583 | 7.0 | 140 | 4.2902 | 1.0 |
| 19.6583 | 8.0 | 160 | 3.9026 | 0.9959 |
| 19.6583 | 9.0 | 180 | 3.0616 | 0.9740 |
| 3.7358 | 10.0 | 200 | 2.2049 | 0.8534 |
| 3.7358 | 11.0 | 220 | 1.6666 | 0.7288 |
| 3.7358 | 12.0 | 240 | 1.4123 | 0.6603 |
| 3.7358 | 13.0 | 260 | 1.3113 | 0.6164 |
| 3.7358 | 14.0 | 280 | 1.2269 | 0.6356 |
| 0.8398 | 15.0 | 300 | 1.2349 | 0.5945 |
| 0.8398 | 16.0 | 320 | 1.1970 | 0.5658 |
| 0.8398 | 17.0 | 340 | 1.2144 | 0.5562 |
| 0.8398 | 18.0 | 360 | 1.2551 | 0.5658 |
| 0.8398 | 19.0 | 380 | 1.1971 | 0.5493 |
| 0.2649 | 20.0 | 400 | 1.1967 | 0.5247 |
| 0.2649 | 21.0 | 420 | 1.2796 | 0.5849 |
| 0.2649 | 22.0 | 440 | 1.2156 | 0.5521 |
| 0.2649 | 23.0 | 460 | 1.2118 | 0.5425 |
| 0.2649 | 24.0 | 480 | 1.1637 | 0.5384 |
| 0.1801 | 25.0 | 500 | 1.1846 | 0.5562 |
| 0.1801 | 26.0 | 520 | 1.1927 | 0.5534 |
| 0.1801 | 27.0 | 540 | 1.2015 | 0.5384 |
| 0.1801 | 28.0 | 560 | 1.2077 | 0.5397 |
| 0.1801 | 29.0 | 580 | 1.1554 | 0.5260 |
| 0.1364 | 30.0 | 600 | 1.1497 | 0.5260 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
amartyobanerjee/distilbert-base-uncased-whole-word-word-ids-finetuned-imdb | 7db323fe6e616d9b2b97918f05df9de5fb2f7360 | 2022-07-28T10:01:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | amartyobanerjee | null | amartyobanerjee/distilbert-base-uncased-whole-word-word-ids-finetuned-imdb | 1 | null | transformers | 33,525 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-whole-word-word-ids-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-whole-word-word-ids-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7261 | 1.0 | 157 | 0.6532 |
| 0.6766 | 2.0 | 314 | 0.6514 |
| 0.6677 | 3.0 | 471 | 0.6555 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news | a40504b3140d80a6eb64a7c7524d55fca156f654 | 2022-07-28T15:22:19.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Atharvgarg | null | Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news | 1 | null | transformers | 33,526 | ---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news
This model is a fine-tuned version of [mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization](https://huggingface.co/mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6835
- Rouge1: 58.9345
- Rouge2: 47.1037
- Rougel: 40.9839
- Rougelsum: 57.6981
## Model description
More information needed
## Intended uses & limitations
More information needed
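A minimal summarization sketch using the encoder-decoder classes the underlying bert2bert checkpoint is built on; the article text is an invented placeholder and the generation length is illustrative, not a tuned value.

```python
from transformers import BertTokenizerFast, EncoderDecoderModel

model_name = "Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news"
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)

article = (
    "Placeholder news story: the city council met on Tuesday to debate a new cycling "
    "scheme, with members split over funding and the timetable for construction."
)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(
    inputs["input_ids"], attention_mask=inputs["attention_mask"], max_length=64
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```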
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.8246 | 1.0 | 223 | 0.7050 | 55.7882 | 42.9793 | 38.4511 | 54.3125 |
| 0.6414 | 2.0 | 446 | 0.6834 | 55.149 | 42.664 | 38.3864 | 53.7712 |
| 0.5603 | 3.0 | 669 | 0.6815 | 56.9756 | 44.8057 | 39.1377 | 55.5815 |
| 0.5079 | 4.0 | 892 | 0.6749 | 57.7397 | 45.6267 | 40.0509 | 56.3886 |
| 0.4622 | 5.0 | 1115 | 0.6781 | 58.07 | 45.9102 | 40.2704 | 56.7008 |
| 0.4263 | 6.0 | 1338 | 0.6798 | 58.1215 | 45.976 | 40.256 | 56.8203 |
| 0.399 | 7.0 | 1561 | 0.6798 | 58.5486 | 46.6901 | 40.8045 | 57.2947 |
| 0.3815 | 8.0 | 1784 | 0.6835 | 58.9345 | 47.1037 | 40.9839 | 57.6981 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Vlasta/DNADebertaSentencepiece10k | 30a2398fc69654482e85b875ae4d64a14fe1053a | 2022-07-28T16:12:43.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/DNADebertaSentencepiece10k | 1 | null | transformers | 33,527 | Entry not found |
qinzhen4/finetuning-sentiment-model-3000-samples | bb1dd45f1c89291cb82f54ecc2659a7c3d2bfcc8 | 2022-07-29T18:57:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | qinzhen4 | null | qinzhen4/finetuning-sentiment-model-3000-samples | 1 | null | transformers | 33,528 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8833333333333333
- name: F1
type: f1
value: 0.8844884488448845
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3200
- Accuracy: 0.8833
- F1: 0.8845
## Model description
More information needed
## Intended uses & limitations
More information needed
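A minimal sentiment sketch; the review snippets are invented, and since the card does not list label names they may surface as generic LABEL_0/LABEL_1 depending on the saved config.

```python
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="qinzhen4/finetuning-sentiment-model-3000-samples",
)
print(sentiment([
    "A wonderful, heartfelt film with a great cast.",
    "Two hours of my life I will never get back.",
]))
```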
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-Sumy | dc9cdcf435be23f4f557bd587d79abf8fb8170de | 2022-07-28T23:32:03.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Atharvgarg | null | Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-Sumy | 1 | null | transformers | 33,529 | ---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-Sumy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-Sumy
This model is a fine-tuned version of [mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization](https://huggingface.co/mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5583
- Rouge1: 55.2899
- Rouge2: 43.2426
- Rougel: 38.5056
- Rougelsum: 53.8807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.7407 | 1.0 | 223 | 1.5900 | 51.3058 | 38.3952 | 35.7343 | 49.7129 |
| 1.4813 | 2.0 | 446 | 1.5500 | 53.8089 | 41.2455 | 37.3864 | 52.3387 |
| 1.3517 | 3.0 | 669 | 1.5429 | 53.4914 | 40.907 | 37.1428 | 52.0338 |
| 1.2432 | 4.0 | 892 | 1.5472 | 54.1139 | 41.3589 | 37.6392 | 52.711 |
| 1.1748 | 5.0 | 1115 | 1.5426 | 55.3482 | 43.312 | 38.0625 | 54.0424 |
| 1.1108 | 6.0 | 1338 | 1.5529 | 55.4752 | 43.3561 | 38.5813 | 54.1141 |
| 1.0745 | 7.0 | 1561 | 1.5539 | 55.705 | 43.6772 | 38.7629 | 54.3892 |
| 1.0428 | 8.0 | 1784 | 1.5583 | 55.2899 | 43.2426 | 38.5056 | 53.8807 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
platzi/platzi-vit_model | 5a87b9520561f6050f7b96bcb7271983ebfaffbe | 2022-07-29T15:42:58.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:beans",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | platzi | null | platzi/platzi-vit_model | 1 | null | transformers | 33,530 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: platzi-vit_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9924812030075187
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0174
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
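A minimal inference sketch that pulls a validation image straight from the beans dataset the model was fine-tuned on; loading that split requires the `datasets` library.

```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("image-classification", model="platzi/platzi-vit_model")

sample = load_dataset("beans", split="validation")[0]  # one leaf image from the beans dataset
print(classifier(sample["image"]))
```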
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.132 | 3.85 | 500 | 0.0174 | 0.9925 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rosicast/hubert-base-ls960-korean-zeroth-char | d1335dacf8d2db1aa2d280a7e49536b5021df15d | 2022-07-30T10:03:01.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | rosicast | null | rosicast/hubert-base-ls960-korean-zeroth-char | 1 | null | transformers | 33,531 | Entry not found |
eclat12450/fine-tuned-NSPKcBert-v3-10 | 25f2d6963c9f48c30dfb8ec807a8d0c7415ce319 | 2022-07-29T02:59:42.000Z | [
"pytorch",
"bert",
"next-sentence-prediction",
"transformers"
] | null | false | eclat12450 | null | eclat12450/fine-tuned-NSPKcBert-v3-10 | 1 | null | transformers | 33,532 | Entry not found |
ArnavL/roberta-10M-imdb-0 | e456c11aa194384987f95cd4df8c9fb4e597d17c | 2022-07-29T03:42:02.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ArnavL | null | ArnavL/roberta-10M-imdb-0 | 1 | null | transformers | 33,533 | Entry not found |
rosicast/hubert-large-ll60k-korean-zeroth-char | 0f325735c802b5a7cc224b134b56af2fbe682755 | 2022-07-30T09:25:43.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | rosicast | null | rosicast/hubert-large-ll60k-korean-zeroth-char | 1 | null | transformers | 33,534 | Entry not found |
chintagunta85/test_ner_5 | 1c51c6f319a8c3f0cc0171cfb99e43680f1b5c89 | 2022-07-29T06:19:35.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | chintagunta85 | null | chintagunta85/test_ner_5 | 1 | null | transformers | 33,535 | Entry not found |
Doohae/lassl-koelectra-small | 620ab9ad76849e86e4094afa17af0f4486123bc4 | 2022-07-29T07:28:48.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | Doohae | null | Doohae/lassl-koelectra-small | 1 | null | transformers | 33,536 | # ELECTRA discriminator small
- pretrained on a large Korean corpus (30GB)
- 13.7M model parameters (follows the google/electra-small-discriminator config)
- 32,000-token vocabulary
- trained for 1,000,000 steps
- built with the [lassl](https://github.com/lassl/lassl) framework
pretrain-data
┣ korean_corpus.txt
┣ kowiki_latest.txt
┣ modu_dialogue_v1.2.txt
┣ modu_news_v1.1.txt
┣ modu_news_v2.0.txt
┣ modu_np_2021_v1.0.txt
┣ modu_np_v1.1.txt
┣ modu_spoken_v1.2.txt
┗ modu_written_v1.0.txt |
SummerChiam/pond_image_classification_4 | 54a2e9d120f326f67f9a180b347a22fef5bbe980 | 2022-07-29T07:25:50.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | SummerChiam | null | SummerChiam/pond_image_classification_4 | 1 | null | transformers | 33,537 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_4
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9783163070678711
---
# pond_image_classification_4
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae

#### Boiling

#### BoilingNight

#### Normal

#### NormalCement

#### NormalNight

#### NormalRain
 |
BramVanroy/bert-base-multilingual-cased-hebban-reviews5 | e05688eafadd08b0495dba2184f4643e39386563 | 2022-07-29T09:54:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"nl",
"dataset:BramVanroy/hebban-reviews",
"transformers",
"sentiment-analysis",
"dutch",
"text",
"license:mit",
"model-index"
] | text-classification | false | BramVanroy | null | BramVanroy/bert-base-multilingual-cased-hebban-reviews5 | 1 | null | transformers | 33,538 | ---
datasets:
- BramVanroy/hebban-reviews
language:
- nl
license: mit
metrics:
- accuracy
- f1
- precision
- qwk
- recall
model-index:
- name: bert-base-multilingual-cased-hebban-reviews5
results:
- dataset:
config: filtered_rating
name: BramVanroy/hebban-reviews - filtered_rating - 2.0.0
revision: 2.0.0
split: test
type: BramVanroy/hebban-reviews
metrics:
- name: Test accuracy
type: accuracy
value: 0.5898668639053254
- name: Test f1
type: f1
value: 0.5899204480029937
- name: Test precision
type: precision
value: 0.5971431895675179
- name: Test qwk
type: qwk
value: 0.7050840079198698
- name: Test recall
type: recall
value: 0.5898668639053254
task:
name: sentiment analysis
type: text-classification
tags:
- sentiment-analysis
- dutch
- text
widget:
- text: Wauw, wat een leuk boek! Ik heb me er goed mee vermaakt.
- text: Nee, deze vond ik niet goed. De auteur doet zijn best om je als lezer mee
te trekken in het verhaal maar mij overtuigt het alleszins niet.
- text: Ik vind het niet slecht maar de schrijfstijl trekt me ook niet echt aan. Het
wordt een beetje saai vanaf het vijfde hoofdstuk
---
# bert-base-multilingual-cased-hebban-reviews5
# Dataset
- dataset_name: BramVanroy/hebban-reviews
- dataset_config: filtered_rating
- dataset_revision: 2.0.0
- labelcolumn: review_rating0
- textcolumn: review_text_without_quotes
# Training
- optim: adamw_hf
- learning_rate: 5e-05
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- gradient_accumulation_steps: 1
- max_steps: 5001
- save_steps: 500
- metric_for_best_model: qwk
# Best checkpoint based on validation
- best_metric: 0.697825193570947
- best_model_checkpoint: trained/hebban-reviews5/bert-base-multilingual-cased/checkpoint-4500
# Test results of best checkpoint
- accuracy: 0.5898668639053254
- f1: 0.5899204480029937
- precision: 0.5971431895675179
- qwk: 0.7050840079198698
- recall: 0.5898668639053254
## Confusion matrix

## Normalized confusion matrix

# Environment
- cuda_capabilities: 8.0; 8.0
- cuda_device_count: 2
- cuda_devices: NVIDIA A100-SXM4-80GB; NVIDIA A100-SXM4-80GB
- finetuner_commit: 8159b4c1d5e66b36f68dd263299927ffb8670ebd
- platform: Linux-4.18.0-305.49.1.el8_4.x86_64-x86_64-with-glibc2.28
- python_version: 3.9.5
- torch_version: 1.10.0
- transformers_version: 4.21.0
|
SummerChiam/pond_image_classification_6 | 03d323b7d34822aa9ffa2a31aab365184a19fa76 | 2022-07-29T08:19:54.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | SummerChiam | null | SummerChiam/pond_image_classification_6 | 1 | null | transformers | 33,539 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_6
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9948979616165161
---
# pond_image_classification_6
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae

#### Boiling

#### BoilingNight

#### Normal

#### NormalCement

#### NormalNight

#### NormalRain
 |
SummerChiam/pond_image_classification_7 | 5d523877316b46a9d595ff2f3d47cba8872d438d | 2022-07-29T08:32:46.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | SummerChiam | null | SummerChiam/pond_image_classification_7 | 1 | null | transformers | 33,540 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_7
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9936224222183228
---
# pond_image_classification_7
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae

#### Boiling

#### BoilingNight

#### Normal

#### NormalCement

#### NormalNight

#### NormalRain
 |
RRajesh27/finetuning-sentiment-model-3000-samples | 268bfcdffd7631904c91d9857576c3266c45c70b | 2022-07-29T08:51:28.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | RRajesh27 | null | RRajesh27/finetuning-sentiment-model-3000-samples | 1 | null | transformers | 33,541 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3236
- Accuracy: 0.8667
- F1: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SummerChiam/pond_image_classification_9 | c560f05c7d11e898991f13ab999a6bc6d359e98b | 2022-07-29T09:13:48.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | SummerChiam | null | SummerChiam/pond_image_classification_9 | 1 | null | transformers | 33,542 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_9
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9974489808082581
---
# pond_image_classification_9
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae

#### Boiling

#### BoilingNight

#### Normal

#### NormalCement

#### NormalNight

#### NormalRain
 |
olemeyer/zero_shot_issue_classification_bart-base-32-d | af05b44a414e00508fb2a29b30c72746370ed92a | 2022-07-29T23:46:30.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | olemeyer | null | olemeyer/zero_shot_issue_classification_bart-base-32-d | 1 | null | transformers | 33,543 | Entry not found |
raisin2402/marian-finetuned-kde4-en-to-fr | 7403053653cb381758c409ee4f72fb0db3bce1d0 | 2022-07-29T12:59:05.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | raisin2402 | null | raisin2402/marian-finetuned-kde4-en-to-fr | 1 | null | transformers | 33,544 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.83113187001415
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8560
- Bleu: 52.8311
## Model description
More information needed
## Intended uses & limitations
More information needed
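A minimal inference sketch, assuming the standard `transformers` translation pipeline; the English sentence is an illustrative KDE-style UI string, not a benchmark example.
```python
from transformers import pipeline

# Sketch: English-to-French translation with the fine-tuned Marian checkpoint.
translator = pipeline("translation_en_to_fr", model="raisin2402/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```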
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Amine007/distilgpt2-finetuned-wikitext2 | 246e363e2f2ac29ec4096cc1ef3cefea1f180c47 | 2022-07-29T14:15:42.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Amine007 | null | Amine007/distilgpt2-finetuned-wikitext2 | 1 | null | transformers | 33,545 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
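A minimal sampling sketch, assuming the standard `transformers` text-generation pipeline; the prompt and generation settings are illustrative.
```python
from transformers import pipeline

# Sketch: sample a continuation from the fine-tuned DistilGPT-2 checkpoint.
generator = pipeline("text-generation", model="Amine007/distilgpt2-finetuned-wikitext2")
output = generator("The history of natural language processing", max_new_tokens=40, do_sample=True)
print(output[0]["generated_text"])
```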
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
platzi/platzi-bert-base-mrpc-glue-omar-espejel | 2dbce0925c2328d5da873d4eb029af397e21a217 | 2022-07-29T13:50:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | platzi | null | platzi/platzi-bert-base-mrpc-glue-omar-espejel | 1 | null | transformers | 33,546 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-bert-base-mrpc-glue-omar-espejel
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8578431372549019
- name: F1
type: f1
value: 0.8941605839416058
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-bert-base-mrpc-glue-omar-espejel
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4366
- Accuracy: 0.8578
- F1: 0.8942
## Model description
More information needed
## Intended uses & limitations
More information needed
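A minimal inference sketch for the MRPC paraphrase task, assuming the conventional GLUE label mapping (0 = not_equivalent, 1 = equivalent) carried by the exported config; the sentence pair is illustrative.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "platzi/platzi-bert-base-mrpc-glue-omar-espejel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task, so both sentences are encoded together.
inputs = tokenizer(
    "The company said quarterly profit rose 10 percent.",
    "Quarterly profit at the company increased by 10 percent, it said.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```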
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5221 | 1.09 | 500 | 0.4366 | 0.8578 | 0.8942 |
| 0.3114 | 2.18 | 1000 | 0.6581 | 0.8725 | 0.9113 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sumba/covid-twitter-bert-v2-no_description-stance | 73aab0210a4b18392c29998c978afa54417bc73e | 2022-07-29T17:11:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | sumba | null | sumba/covid-twitter-bert-v2-no_description-stance | 1 | null | transformers | 33,547 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: covid-twitter-bert-v2-no_description-stance
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-twitter-bert-v2-no_description-stance
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5898
- Accuracy: 0.1814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8009 | 1.0 | 632 | 0.5898 | 0.1814 |
| 0.5663 | 2.0 | 1264 | 0.5613 | 0.0750 |
| 0.394 | 3.0 | 1896 | 0.6726 | 0.0347 |
| 0.1677 | 4.0 | 2528 | 0.8051 | 0.0269 |
| 0.08 | 5.0 | 3160 | 0.8690 | 0.0202 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ArthurZ/opt-350m-dummy-sc | d95ac8ae641f4f364688a6bda9754a5996e4f536 | 2022-07-29T15:53:54.000Z | [
"pytorch",
"opt",
"text-classification",
"transformers"
] | text-classification | false | ArthurZ | null | ArthurZ/opt-350m-dummy-sc | 1 | null | transformers | 33,548 | Entry not found |
natalierobbins/pos_test_model_1 | e3f3b1b2c4c9c1662193295137ce90f2445ab3f2 | 2022-07-29T19:21:52.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | natalierobbins | null | natalierobbins/pos_test_model_1 | 1 | null | transformers | 33,549 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: pos_test_model_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pos_test_model_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1521
- Accuracy: 0.9530
- F1: 0.9523
- Precision: 0.9576
- Recall: 0.9530
## Model description
More information needed
## Intended uses & limitations
More information needed
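A minimal tagging sketch, assuming the standard `transformers` token-classification pipeline; the tagset returned depends on this checkpoint's (undocumented) label config, and the sentence is illustrative.
```python
from transformers import pipeline

# Sketch: tag each token of a sample sentence with the fine-tuned model.
tagger = pipeline("token-classification", model="natalierobbins/pos_test_model_1")
for token in tagger("The quick brown fox jumps over the lazy dog."):
    print(token["word"], token["entity"])
```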
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1882 | 1.0 | 1744 | 0.1521 | 0.9530 | 0.9523 | 0.9576 | 0.9530 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
schnell/bert-small-spm | 705a3ae6f0dacbae4f755568a066175dcf357969 | 2022-07-30T06:10:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | schnell | null | schnell/bert-small-spm | 1 | null | transformers | 33,550 | Entry not found |
ibm/re2g-reranker-nq | 40c10898c7f5af5efff0beebc5633739be68bcd1 | 2022-07-29T16:08:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | ibm | null | ibm/re2g-reranker-nq | 1 | null | transformers | 33,551 | ---
license: apache-2.0
---
|
eduagarcia/r_j_v2_checkpoint_36_48000 | 21d58740e46dbd3b9f730c9a61d990c76334828e | 2022-07-29T16:22:27.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | eduagarcia | null | eduagarcia/r_j_v2_checkpoint_36_48000 | 1 | null | transformers | 33,552 | Entry not found |
jungjongho/wav2vec2-large-xlsr-korean-demo-colab_epoch15 | d20ec977df62c0701ff0a3b4880f497cd8e562e5 | 2022-07-29T21:25:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jungjongho | null | jungjongho/wav2vec2-large-xlsr-korean-demo-colab_epoch15 | 1 | null | transformers | 33,553 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-korean-demo-colab_epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-korean-demo-colab_epoch15
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4133
- Wer: 0.3801
## Model description
More information needed
## Intended uses & limitations
More information needed
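A minimal transcription sketch, assuming the standard `transformers` speech-recognition pipeline and a 16 kHz audio file; `sample_ko.wav` is a placeholder path.
```python
from transformers import pipeline

# Sketch: transcribe a Korean speech clip (decoding a file path requires ffmpeg;
# alternatively pass a 16 kHz numpy waveform directly).
asr = pipeline("automatic-speech-recognition", model="jungjongho/wav2vec2-large-xlsr-korean-demo-colab_epoch15")
print(asr("sample_ko.wav")["text"])
```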
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 16.9017 | 0.8 | 400 | 4.6273 | 1.0 |
| 4.4633 | 1.6 | 800 | 4.4419 | 1.0 |
| 4.2262 | 2.4 | 1200 | 3.8477 | 0.9994 |
| 2.4402 | 3.21 | 1600 | 1.3564 | 0.8111 |
| 1.3499 | 4.01 | 2000 | 0.9070 | 0.6664 |
| 0.9922 | 4.81 | 2400 | 0.7496 | 0.6131 |
| 0.8271 | 5.61 | 2800 | 0.6240 | 0.5408 |
| 0.6918 | 6.41 | 3200 | 0.5506 | 0.5026 |
| 0.6015 | 7.21 | 3600 | 0.5303 | 0.4935 |
| 0.5435 | 8.02 | 4000 | 0.4951 | 0.4696 |
| 0.4584 | 8.82 | 4400 | 0.4677 | 0.4432 |
| 0.4258 | 9.62 | 4800 | 0.4602 | 0.4307 |
| 0.3906 | 10.42 | 5200 | 0.4456 | 0.4195 |
| 0.3481 | 11.22 | 5600 | 0.4265 | 0.4062 |
| 0.3216 | 12.02 | 6000 | 0.4241 | 0.4046 |
| 0.2908 | 12.83 | 6400 | 0.4106 | 0.3941 |
| 0.2747 | 13.63 | 6800 | 0.4146 | 0.3855 |
| 0.2633 | 14.43 | 7200 | 0.4133 | 0.3801 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
sumba/covid-twitter-bert-v2-no_description-stance-processed | bf84c59292656e8af987bbe170fa867c087f81db | 2022-07-29T17:17:24.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | sumba | null | sumba/covid-twitter-bert-v2-no_description-stance-processed | 1 | null | transformers | 33,554 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: covid-twitter-bert-v2-no_description-stance-processed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-twitter-bert-v2-no_description-stance-processed
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7178
- Accuracy: 0.3158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8419 | 1.0 | 632 | 0.7178 | 0.3158 |
| 0.6041 | 2.0 | 1264 | 0.5969 | 0.1041 |
| 0.4775 | 3.0 | 1896 | 0.5881 | 0.0829 |
| 0.2344 | 4.0 | 2528 | 0.8113 | 0.0470 |
| 0.15 | 5.0 | 3160 | 0.9235 | 0.0347 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Migga/ViT-BERT-Chess-V4 | e151dc069281e5bf471d1c46d1cddf28ce65b7b9 | 2022-07-30T04:26:03.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers",
"generated_from_trainer",
"model-index"
] | null | false | Migga | null | Migga/ViT-BERT-Chess-V4 | 1 | null | transformers | 33,555 | ---
tags:
- generated_from_trainer
model-index:
- name: ViT-BERT-Chess-V4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT-BERT-Chess-V4
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.705 | 1.0 | 3895 | 3.5686 |
| 3.5139 | 2.0 | 7790 | 3.4288 |
| 3.4156 | 3.0 | 11685 | 3.3663 |
| 3.3661 | 4.0 | 15580 | 3.3331 |
| 3.3352 | 5.0 | 19475 | 3.3213 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
|
clefourrier/graphnystromformer-large-cf-artificial-balanced-max500-105000-1 | 7c749b02b1537e6c3dfbdaab7c461a442bdb7bed | 2022-07-29T17:03:39.000Z | [
"pytorch",
"graph_nystromformer",
"text-classification",
"transformers"
] | text-classification | false | clefourrier | null | clefourrier/graphnystromformer-large-cf-artificial-balanced-max500-105000-1 | 1 | null | transformers | 33,556 | Entry not found |
Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy | 96004db5e98c827d9bd0c01226042c548a0b1d43 | 2022-07-29T17:50:17.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Atharvgarg | null | Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy | 1 | null | transformers | 33,557 | ---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy
This model is a fine-tuned version of [mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization](https://huggingface.co/mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3228
- Rouge1: 56.5706
- Rouge2: 43.0906
- Rougel: 47.9957
- Rougelsum: 53.417
## Model description
More information needed
## Intended uses & limitations
More information needed
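A minimal summarisation sketch, assuming the checkpoint loads as a standard bert2bert `EncoderDecoderModel`; the article string is a placeholder.
```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

article = "Replace this placeholder with the BBC-style article you want to summarise."
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```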
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.3226 | 1.0 | 223 | 0.3225 | 55.7639 | 41.9414 | 46.9804 | 52.5639 |
| 0.262 | 2.0 | 446 | 0.3198 | 55.7522 | 42.0929 | 46.8388 | 52.6659 |
| 0.2153 | 3.0 | 669 | 0.3195 | 55.7091 | 42.2111 | 47.2641 | 52.5765 |
| 0.1805 | 4.0 | 892 | 0.3164 | 55.8115 | 42.5536 | 47.3529 | 52.7672 |
| 0.1527 | 5.0 | 1115 | 0.3203 | 56.8658 | 43.4238 | 48.2268 | 53.8136 |
| 0.14 | 6.0 | 1338 | 0.3234 | 55.7138 | 41.8562 | 46.8362 | 52.5201 |
| 0.1252 | 7.0 | 1561 | 0.3228 | 56.5706 | 43.0906 | 47.9957 | 53.417 |
| 0.1229 | 8.0 | 1784 | 0.3228 | 56.5706 | 43.0906 | 47.9957 | 53.417 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ibm/re2g-generation-trex | bf2ac30ac727cd02ad9a4acc12826753731daf45 | 2022-07-29T18:06:05.000Z | [
"pytorch",
"rag",
"transformers",
"license:apache-2.0"
] | null | false | ibm | null | ibm/re2g-generation-trex | 1 | null | transformers | 33,558 | ---
license: apache-2.0
---
|
ibm/re2g-reranker-trex | b8ccc5d5be594d9569fd0eb57ce3a4e2bfe6acd8 | 2022-07-29T18:10:28.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | ibm | null | ibm/re2g-reranker-trex | 1 | null | transformers | 33,559 | ---
license: apache-2.0
---
|
sumba/covid-twitter-bert-v2-with_description-stance | 57b5a319b733d0913f13bdaa83430110216b38e8 | 2022-07-29T18:50:26.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | sumba | null | sumba/covid-twitter-bert-v2-with_description-stance | 1 | null | transformers | 33,560 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: covid-twitter-bert-v2-with_description-stance
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-twitter-bert-v2-with_description-stance
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6433
- Accuracy: 0.2486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9001 | 1.0 | 632 | 0.6433 | 0.2486 |
| 0.6247 | 2.0 | 1264 | 0.5531 | 0.0829 |
| 0.4811 | 3.0 | 1896 | 0.6068 | 0.0694 |
| 0.2546 | 4.0 | 2528 | 0.7426 | 0.0414 |
| 0.1365 | 5.0 | 3160 | 0.8197 | 0.0392 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ibm/re2g-ctx-encoder-triviaqa | 52c28d387d659c8d4941a7676f289444e9cab225 | 2022-07-29T19:03:55.000Z | [
"pytorch",
"dpr",
"transformers",
"license:apache-2.0"
] | null | false | ibm | null | ibm/re2g-ctx-encoder-triviaqa | 1 | null | transformers | 33,561 | ---
license: apache-2.0
---
|
clefourrier/graphnystromformer-small-cf-artificial-unbalanced-nodes-468000-0 | e9b615ff564054f34bc4e1dcc2890f6cd61044f4 | 2022-07-29T19:48:22.000Z | [
"pytorch",
"graph_nystromformer",
"text-classification",
"transformers"
] | text-classification | false | clefourrier | null | clefourrier/graphnystromformer-small-cf-artificial-unbalanced-nodes-468000-0 | 1 | null | transformers | 33,562 | Entry not found |
clefourrier/nystromformer-large-cf-artificial-balanced-max500-105000-1 | 37d033841122df2de1b991df4513be9366818524 | 2022-07-29T21:13:05.000Z | [
"pytorch",
"graph_nystromformer",
"text-classification",
"transformers"
] | text-classification | false | clefourrier | null | clefourrier/nystromformer-large-cf-artificial-balanced-max500-105000-1 | 1 | null | transformers | 33,563 | Entry not found |
clefourrier/graphnystromformer-cf-artificial-balanced-max500-490000-1 | 09197236044430e871c3540a9a4efa3f0329fe87 | 2022-07-29T23:07:03.000Z | [
"pytorch",
"graph_nystromformer",
"text-classification",
"transformers"
] | text-classification | false | clefourrier | null | clefourrier/graphnystromformer-cf-artificial-balanced-max500-490000-1 | 1 | null | transformers | 33,564 | Entry not found |
muhtasham/tiny-bert-finetuned-ner-accelerate-gpu | 33ebd025a38d5440841ad0cce2f05b65184798d4 | 2022-07-30T00:51:09.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | muhtasham | null | muhtasham/tiny-bert-finetuned-ner-accelerate-gpu | 1 | null | transformers | 33,565 | Entry not found |
sophiestein/experiment-finetuned-ner | 7f97bca8f8e552967a728ffe20095c1cc32cbb2d | 2022-07-30T04:37:44.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | sophiestein | null | sophiestein/experiment-finetuned-ner | 1 | null | transformers | 33,566 | Entry not found |
fzwd6666/dummy-model | 8c358aa8597d6a6147efdd92ba8142ad2b954026 | 2022-07-30T00:52:28.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | fzwd6666 | null | fzwd6666/dummy-model | 1 | null | transformers | 33,567 | Entry not found |
eclat12450/fine-tuned-NSPKcBert-v4-10 | 67949b19faf98ea938c4c6b00c24ab92c3ecd633 | 2022-07-30T03:14:48.000Z | [
"pytorch",
"bert",
"next-sentence-prediction",
"transformers"
] | null | false | eclat12450 | null | eclat12450/fine-tuned-NSPKcBert-v4-10 | 1 | null | transformers | 33,568 | Entry not found |
clefourrier/nystromformer-small-cf-artificial-unbalanced-nodes-468000-0 | ef9331a2f17fe8a0e184f116ccc2af7282e2ac0b | 2022-07-30T03:33:44.000Z | [
"pytorch",
"graph_nystromformer",
"text-classification",
"transformers"
] | text-classification | false | clefourrier | null | clefourrier/nystromformer-small-cf-artificial-unbalanced-nodes-468000-0 | 1 | null | transformers | 33,569 | Entry not found |
CurtisBowser/DialoGPT-medium-sora-two | f0997b102f566d3e0cd66084092a3c6a3208994b | 2021-11-04T03:31:02.000Z | [
"pytorch",
"conversational"
] | conversational | false | CurtisBowser | null | CurtisBowser/DialoGPT-medium-sora-two | 0 | null | null | 33,570 | ---
tags:
- conversational
---
# Sora DialoGPT Model |
Darren/darren | a0a7f41ba55077fb255e60484e663f7e765f4464 | 2022-01-14T13:14:04.000Z | [
"pytorch"
] | null | false | Darren | null | Darren/darren | 0 | null | null | 33,571 | Entry not found |
JihyukKim/cbert-aleqd-s100-b36-g2-ib-hn | 96c749b1072f9089e440f7f0404d54fabfa2b438 | 2022-01-05T21:03:04.000Z | [
"pytorch"
] | null | false | JihyukKim | null | JihyukKim/cbert-aleqd-s100-b36-g2-ib-hn | 0 | null | null | 33,572 | Entry not found |
JihyukKim/cbert-b36-g2-ib-hn | 457718f963e08033eef4462d2be226ac6bb6839b | 2022-01-05T20:56:08.000Z | [
"pytorch"
] | null | false | JihyukKim | null | JihyukKim/cbert-b36-g2-ib-hn | 0 | null | null | 33,573 | Entry not found |
LysandreJik/metnet-test | 139cacb71093961d28fa81a53560aded435b92a4 | 2021-09-07T19:34:52.000Z | [
"pytorch"
] | null | false | LysandreJik | null | LysandreJik/metnet-test | 0 | null | null | 33,574 | Entry not found |
NovelAI/genji-python-6B-split | 890390be84051bcdb60036e0af158a47dad96f8a | 2021-08-06T18:57:56.000Z | [
"en",
"dataset:the Pile",
"arxiv:2104.09864",
"pytorch",
"causal-lm",
"license:apache-2.0"
] | null | false | NovelAI | null | NovelAI/genji-python-6B-split | 0 | null | null | 33,575 | ---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- the Pile
---
# Genji-python 6B
For example usage, or to try the model out quickly, check our Colab notebook:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
## Model Description
Genji is a transformer model finetuned from EleutherAI's GPT-J 6B model. This particular model is trained on roughly 4 GB of Python-only code.
The split model has its checkpoint split into shards, which uses less system RAM while loading and makes loading faster.
This model needs more effort to set up as you need to install git-lfs and pull the repo.
| Hyperparameter | Value |
|-------------------|--------|
| n_parameters | 6,053,381,344 |
| n_layers | 28* |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 50,400 (same tokenizer as GPT-2/3) |
| position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
`*` each layer consists of one feedforward block and one self attention block
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) was applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
GPT-J 6B was pretrained on the [Pile](pile.eleuther.ai), a large-scale curated dataset created by EleutherAI for the purpose of training this model. After pre-training, it was finetuned on the Python code taken from the Pile.
## Training procedure
Genji-python-6B is trained for 20k steps on around 655 million tokens with a learning rate of 2e-06.
## Intended Use
This model is trained to assist with writing Python code, and for having fun trying weird stuff with it.
### How to use
This model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.
For now, you need to use this fork:
[Fork](https://github.com/finetuneanon/transformers)
to install with pip:
```bash
pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b
```
**git-lfs** also needs to be installed, on ubuntu:
```bash
apt install git-lfs
```
after it's installed, initialize git-lfs:
```bash
git lfs install
```
then clone this repo:
```bash
git clone https://huggingface.co/NovelAI/genji-python-6B-split
```
Now we can load the model.
We recommend using the model in FP16; that way, it fits on 16 GB VRAM cards.
How to use:
```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    GPTNeoForCausalLM,
)
model = AutoModelForCausalLM.from_pretrained("genji-python-6B-split/model").half().eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
text = '''def print_customer_name'''
tokens = tokenizer(text, return_tensors="pt").input_ids
generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, top_k=50, temperature=0.3, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id)
last_tokens = generated_tokens[0][len(tokens[0]):]
generated_text = tokenizer.decode(last_tokens)
print("Generation:\n" + generated_text)
```
When ran, this code generates:
```python
Prompt:
def print_customer_name
Generation:
(self, customer):
    """Print the name of a customer."""
    if not self.is_valid():
        return
    print("Customer: {}".format(customer))
```
For example usage, you can see our colab notebook as well:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
## Eval results
TBD
## Acknowledgements
This project was possible because of the compute provided by the
[TPU Research Cloud](https://sites.research.google/trc/) and [EleutherAI](https://eleuther.ai/) for pretraining of the GPT-J 6B.
Thanks to everyone who contributed to this project:
- [Aero](https://github.com/AeroScripts)
- [Finetune](https://github.com/finetuneanon)
- [Kurumuz](https://github.com/kurumuz) |
SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii | dc33fcdc3beef40c79454e675303451a8af49572 | 2021-10-23T04:03:28.000Z | [
"multilingual",
"dataset:Commonlit-Readibility",
"kaggle",
"rembert",
"pytorch",
"question-answering",
"license:cc0-1.0"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii | 0 | null | null | 33,576 | ---
thumbnail: https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true
tags:
- kaggle
- rembert
- pytorch
- question-answering
language: multilingual
license: cc0-1.0
inference: false
datasets:
- Commonlit-Readibility
---
<div align = "center">
<img src = "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true">
</div>
This dataset contains the [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights according to my team's experimentation strategy during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. They are listed below with their corresponding public LB score:-
| Huggingface Hub Link | Public LB Score |
| :---: | :---: |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
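For completeness, an untested loading sketch: it assumes each checkpoint exposes a standard extractive question-answering head (the repos are marked `inference: false`, so the hosted widget is disabled) and uses placeholder question/context strings.
```python
from transformers import pipeline

# Untested sketch -- assumes the uploaded RemBERT weights include a standard
# extractive QA head; the question and context below are placeholders.
qa = pipeline("question-answering", model="SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii")
print(qa(question="...", context="..."))
```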
|
SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii | 347358604cb5ffe24adc7fb1cebaa6ad865b57aa | 2021-10-23T04:03:12.000Z | [
"multilingual",
"dataset:Commonlit-Readibility",
"kaggle",
"rembert",
"pytorch",
"question-answering",
"license:cc0-1.0"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii | 0 | null | null | 33,577 | ---
thumbnail: https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true
tags:
- kaggle
- rembert
- pytorch
- question-answering
language: multilingual
license: cc0-1.0
inference: false
datasets:
- Commonlit-Readibility
---
<div align = "center">
<img src = "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true">
</div>
This dataset contains the [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights according to my team's experimentation strategy during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. They are listed below with their corresponding public LB score:-
| Huggingface Hub Link | Public LB Score |
| :---: | :---: |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
|
SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii | e9355ecff169434b7072299a28e86f80ed868a00 | 2021-10-23T04:03:04.000Z | [
"multilingual",
"dataset:Commonlit-Readibility",
"kaggle",
"rembert",
"pytorch",
"question-answering",
"license:cc0-1.0"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii | 0 | null | null | 33,578 | ---
thumbnail: https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true
tags:
- kaggle
- rembert
- pytorch
- question-answering
language: multilingual
license: cc0-1.0
inference: false
datasets:
- Commonlit-Readibility
---
<div align = "center">
<img src = "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true">
</div>
This dataset contains the [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights according to my team's experimentation strategy during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. They are listed below with their corresponding public LB score:-
| Huggingface Hub Link | Public LB Score |
| :---: | :---: |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
|
SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii | 7f7402b700b965285ba86d4b5f3d72bfaf3600a0 | 2021-10-23T04:02:43.000Z | [
"multilingual",
"dataset:Commonlit-Readibility",
"kaggle",
"rembert",
"pytorch",
"question-answering",
"license:cc0-1.0"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii | 0 | null | null | 33,579 | ---
thumbnail: https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true
tags:
- kaggle
- rembert
- pytorch
- question-answering
language: multilingual
license: cc0-1.0
inference: false
datasets:
- Commonlit-Readibility
---
<div align = "center">
<img src = "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true">
</div>
This dataset contains the [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights according to my team's experimentation strategy during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. They are listed below with their corresponding public LB score:-
| Huggingface Hub Link | Public LB Score |
| :---: | :---: |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
|
Souvikcmsa/LogFiBER | 90932316952537cc4357a45de92e0a0c0416b3cc | 2021-12-17T10:05:05.000Z | [
"pytorch"
] | null | false | Souvikcmsa | null | Souvikcmsa/LogFiBER | 0 | null | null | 33,580 | Log FiBER
This model produces sentence embeddings. |
TaahaKazi/joke-generator | 781fa53d7ebe5456fe442f95a9f7edc8c010dd41 | 2021-04-12T09:19:07.000Z | [
"pytorch"
] | null | false | TaahaKazi | null | TaahaKazi/joke-generator | 0 | null | null | 33,581 | Entry not found |
alanakbik/test-serialization | 6f8587f7475d01683dd0fd03d386916c3b3e99b1 | 2021-03-15T21:26:58.000Z | [
"pytorch"
] | null | false | alanakbik | null | alanakbik/test-serialization | 0 | null | null | 33,582 | Entry not found |
lmqg/bart-base-squad-default | 1cc9ce592c2b82e6cda3c17235e54d0bfef8a7af | 2022-05-31T23:55:18.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:squad",
"transformers",
"question generation",
"question answer generation",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-squad-default | 0 | null | transformers | 33,583 | ---
language:
- en
tags:
- question generation
- question answer generation
license: mit
datasets:
- squad
metrics:
- bleu
- meteor
- rouge
widget:
- text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Example 1"
- text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Example 2"
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Example 3"
---
# BART finetuned on Question Generation
BART model for question generation. Please visit [our repository](https://github.com/asahi417/t5-question-generation) for more detail.
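A minimal generation sketch, assuming the highlight-token input format shown in the widget examples above; the passage is taken from those examples.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lmqg/bart-base-squad-default"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The answer span is wrapped in <hl> tokens, as in the widget examples.
text = ("<hl> Beyonce <hl> further expanded her acting career, starring as blues "
        "singer Etta James in the 2008 musical biopic, Cadillac Records.")
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```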
|
lmqg/bart-base-squad-no-answer | f3a496de87e1cb94d6f79c61bfafb49d8e6f5b9b | 2022-06-01T00:17:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-squad-no-answer | 0 | null | transformers | 33,584 | Entry not found |
lmqg/bart-base-squad-no-paragraph | dca517dc5d6a9a2ba8fd127a9fe0546f2d0a29a9 | 2022-06-01T00:21:02.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-squad-no-paragraph | 0 | null | transformers | 33,585 | Entry not found |
lmqg/bart-large-squad-no-answer | 45c9b1577c1d1dfd1f039be78759591d1d55e947 | 2022-06-01T00:21:20.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-squad-no-answer | 0 | null | transformers | 33,586 | Entry not found |
lmqg/bart-large-squad-no-paragraph | 6fa3379f620c683fde6cfd0f28a40e06754e75cb | 2022-06-01T00:21:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-squad-no-paragraph | 0 | null | transformers | 33,587 | Entry not found |
lmqg/t5-base-squad-no-paragraph | e0152584479af5a3def2f999346b3af0786ddcf7 | 2022-06-01T00:24:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-squad-no-paragraph | 0 | null | transformers | 33,588 | Entry not found |
lmqg/t5-large-squad-no-answer | 83ef7f8a79aaf1d2f81012c770ac7fc27a8a648b | 2022-06-01T00:24:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-squad-no-answer | 0 | null | transformers | 33,589 | Entry not found |
lmqg/t5-large-squad-no-paragraph | 23ee7d53d11ff76d0bf11f58e877e76ccf03de49 | 2022-06-01T00:24:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-squad-no-paragraph | 0 | null | transformers | 33,590 | Entry not found |
lmqg/t5-small-squad-default | dab42386cb12bff432443607187f9fc90627d6cd | 2022-06-01T00:25:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:squad",
"transformers",
"question generation",
"question answer generation",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-squad-default | 0 | null | transformers | 33,591 | ---
language:
- en
tags:
- question generation
- question answer generation
license: mit
datasets:
- squad
metrics:
- bleu
- meteor
- rouge
widget:
- text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Example 1"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Example 2"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Example 3"
---
# T5 finetuned on Question Generation
T5 model for question generation. Please visit [our repository](https://github.com/asahi417/t5-question-generation) for more detail. |
lmqg/t5-small-squad-no-answer | 9187781b7d5510fab0c752ad0b86dd66a2ef8c7c | 2022-06-01T00:25:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-squad-no-answer | 0 | null | transformers | 33,592 | Entry not found |
tner/xlm-roberta-base-bc5cdr | 6933d1e8269bf51d988d0ec39060b639648390fa | 2021-02-13T00:06:56.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-bc5cdr | 0 | null | transformers | 33,593 | # XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner).
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-bc5cdr")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-bc5cdr")
``` |
tner/xlm-roberta-base-fin | d7d10d01cbda0f67b200ff41d5d1b0efd6ffe8c3 | 2021-02-12T23:33:59.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-fin | 0 | null | transformers | 33,594 | # XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner).
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-fin")
``` |
tner/xlm-roberta-base-panx-dataset-ar | bf61279d9ffb72fdb1a0cc2b0ab555a38abef46f | 2021-02-12T23:34:15.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-panx-dataset-ar | 0 | null | transformers | 33,595 | # XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner).
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ar")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ar")
``` |
tner/xlm-roberta-base-panx-dataset-en | 21a5c5b2488171f063dd565b3d38ddd1cb1433f7 | 2021-02-13T00:07:38.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-panx-dataset-en | 0 | null | transformers | 33,596 | # XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner).
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-en")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-en")
``` |
tner/xlm-roberta-base-panx-dataset-es | 89b2ab786debdc0bbfa03cd82802524de413bed5 | 2021-02-12T23:34:35.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-panx-dataset-es | 0 | null | transformers | 33,597 |
# XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner).
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-es")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-es")
``` |
tner/xlm-roberta-base-panx-dataset-ja | 928240f47cd7269a928ccdb6c349225984fd6e68 | 2021-02-13T00:08:40.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-panx-dataset-ja | 0 | null | transformers | 33,598 | # XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner).
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ja")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ja")
``` |
tner/xlm-roberta-base-panx-dataset-ko | e5be8a06aa3e1f5503abff7217b4648a23b2e4da | 2021-02-12T23:34:47.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-panx-dataset-ko | 0 | null | transformers | 33,599 |
# XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner).
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ko")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ko")
``` |