modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Jeevesh8/feather_berts_67 | f34380360b06875350e289ad2b57c68b0048ba69 | 2022-04-20T13:42:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/feather_berts_67 | 5 | null | transformers | 17,100 | Entry not found |
Jeevesh8/feather_berts_77 | b01bc9716ee80d9a97749d145e26e841f3691869 | 2022-04-20T13:46:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/feather_berts_77 | 5 | null | transformers | 17,101 | Entry not found |
Jeevesh8/feather_berts_85 | dc483f0a95045023946a22e3403a343825e83e32 | 2022-04-20T13:50:29.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/feather_berts_85 | 5 | null | transformers | 17,102 | Entry not found |
Jeevesh8/feather_berts_93 | 73468add0c9bc27e7684f66d1a1b25d98ec3139c | 2022-04-20T13:54:29.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/feather_berts_93 | 5 | null | transformers | 17,103 | Entry not found |
ktangri/autotrain-financial-sentiment-765323474 | c335af66226f04ad16feebee47ac01fca22d1970 | 2022-04-20T14:35:01.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:ktangri/autotrain-data-financial-sentiment",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | ktangri | null | ktangri/autotrain-financial-sentiment-765323474 | 5 | null | transformers | 17,104 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ktangri/autotrain-data-financial-sentiment
co2_eq_emissions: 0.007501354635994803
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 765323474
- CO2 Emissions (in grams): 0.007501354635994803
## Validation Metrics
- Loss: 0.0447433702647686
- Accuracy: 0.9823788546255506
- Macro F1: 0.974405452470854
- Micro F1: 0.9823788546255506
- Weighted F1: 0.9823043153179869
- Macro Precision: 0.978208375548801
- Micro Precision: 0.9823788546255506
- Weighted Precision: 0.9823204968555985
- Macro Recall: 0.9707159078140736
- Micro Recall: 0.9823788546255506
- Weighted Recall: 0.9823788546255506
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ktangri/autotrain-financial-sentiment-765323474
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ktangri/autotrain-financial-sentiment-765323474", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ktangri/autotrain-financial-sentiment-765323474", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
masapasa/deberta_amazon_reviews_v1 | 8f57a60fd5e9eb876c58ed68f6452b2b7558ec3a | 2022-04-20T15:23:24.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | masapasa | null | masapasa/deberta_amazon_reviews_v1 | 5 | null | transformers | 17,105 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta_amazon_reviews_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_amazon_reviews_v1
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cpu
- Datasets 1.18.3
- Tokenizers 0.11.0
|
nirmalkumar/gpt2-cric-commentary | 00eb0226526890f3340fda6638c1034eb94a8b18 | 2022-04-20T21:11:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | nirmalkumar | null | nirmalkumar/gpt2-cric-commentary | 5 | null | transformers | 17,106 | Entry not found |
kabelomalapane/model_zu-en_updated | 9fbd9cdc1db3fb72c83e37c667cefe25e9eec7c4 | 2022-04-22T02:55:18.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | kabelomalapane | null | kabelomalapane/model_zu-en_updated | 5 | null | transformers | 17,107 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: model_zu-en_updated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_zu-en_updated
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-mul-en](https://huggingface.co/Helsinki-NLP/opus-mt-mul-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8306
- Bleu: 27.1218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Plaban81/results | ceb0df33034ad4a5ee993b19397280286af31f19 | 2022-04-21T13:53:54.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Plaban81 | null | Plaban81/results | 5 | null | transformers | 17,108 | Entry not found |
jackmleitch/distilbert-base-uncased-finetuned-clinc | 1f3549d70bef634b40270a5dc617ff26cd3b9f07 | 2022-04-21T18:22:26.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jackmleitch | null | jackmleitch/distilbert-base-uncased-finetuned-clinc | 5 | null | transformers | 17,109 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7702
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2984 | 1.0 | 318 | 3.2941 | 0.7490 |
| 2.6352 | 2.0 | 636 | 1.8755 | 0.8410 |
| 1.5468 | 3.0 | 954 | 1.1587 | 0.8913 |
| 1.0086 | 4.0 | 1272 | 0.8541 | 0.9123 |
| 0.7941 | 5.0 | 1590 | 0.7702 | 0.9184 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
caurdy/wav2vec2-large-960h-lv60-self_MIDIARIES_72H_FT | 053555bab3bb2f1304920ce1c0f3ea11553c791c | 2022-04-22T16:45:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"license:afl-3.0"
] | automatic-speech-recognition | false | caurdy | null | caurdy/wav2vec2-large-960h-lv60-self_MIDIARIES_72H_FT | 5 | null | transformers | 17,110 | ---
license: afl-3.0
---
Fine-tuned version of Facebook's pre-trained wav2vec2-large-960h-lv60-self model, trained on 72 hours of MI Diaries data.
WER improved from 13% to 9.7% on a 20-minute test set of MI Diaries audio clips (https://mi-diaries.org/).
### Usage ###
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model = Wav2Vec2ForCTC.from_pretrained("caurdy/wav2vec2-large-960h-lv60-self_MIDIARIES_72H_FT")
processor = Wav2Vec2Processor.from_pretrained("caurdy/wav2vec2-large-960h-lv60-self_MIDIARIES_72H_FT")
```
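A minimal transcription sketch following the loading code above; the audio file path and the 16 kHz mono input are illustrative assumptions rather than part of the original card:
```python
import torch
import librosa

# Load a 16 kHz mono clip (the path is a placeholder).
speech, _ = librosa.load("my_diary_clip.wav", sr=16000)

# Featurize the waveform and run greedy CTC decoding.
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```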
|
Sarim24/xlm-roberta-base-finetuned-panx-de | 7f0cbff81d4d46c0a7037b321a23e42350ad896c | 2022-04-21T23:12:20.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | Sarim24 | null | Sarim24/xlm-roberta-base-finetuned-panx-de | 5 | null | transformers | 17,111 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.862669465085938
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1374
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2596 | 1.0 | 525 | 0.1571 | 0.8302 |
| 0.1292 | 2.0 | 1050 | 0.1416 | 0.8455 |
| 0.0809 | 3.0 | 1575 | 0.1374 | 0.8627 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PrasunMishra/prasun | df9b7886dbe7a9a74a27dcef98ff950f1f39a240 | 2022-04-22T05:30:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | PrasunMishra | null | PrasunMishra/prasun | 5 | null | transformers | 17,112 | Entry not found |
huggingtweets/it_its_are_are-miyarepostbot-unbridled_id | ff4932f5e957ad108d66ed8cc249abd2d8190ec4 | 2022-04-22T19:04:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/it_its_are_are-miyarepostbot-unbridled_id | 5 | null | transformers | 17,113 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1376263696389914629/_FzhUcTW_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1480214799539740676/S3W8I0f2_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1400304659688878088/Lbb8zMZE_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sierra Armour 𝔼𝕣𝕚𝕤 & angelicism2727272628 & Miya</div>
<div style="text-align: center; font-size: 14px;">@it_its_are_are-miyarepostbot-unbridled_id</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Sierra Armour 𝔼𝕣𝕚𝕤 & angelicism2727272628 & Miya.
| Data | Sierra Armour 𝔼𝕣𝕚𝕤 | angelicism2727272628 | Miya |
| --- | --- | --- | --- |
| Tweets downloaded | 3146 | 179 | 1840 |
| Retweets | 545 | 28 | 23 |
| Short tweets | 413 | 20 | 214 |
| Tweets kept | 2188 | 131 | 1603 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wlae4njw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @it_its_are_are-miyarepostbot-unbridled_id's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2xs5iik1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2xs5iik1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/it_its_are_are-miyarepostbot-unbridled_id')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
dapang/distilroberta-base-etc-nlp | 13c436b12cbbeba56150e8cde95ff3ba39f15800 | 2022-04-23T04:20:09.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | dapang | null | dapang/distilroberta-base-etc-nlp | 5 | null | transformers | 17,114 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-etc-nlp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-etc-nlp
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0039
- Accuracy: 0.9993
- F1: 0.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.740146306575944e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 262 | 0.0025 | 0.9997 | 0.9997 |
| No log | 2.0 | 524 | 0.0039 | 0.9993 | 0.9993 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0.dev20220422+cu116
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dapang/distilroberta-base-mrl-sym | 285b546d7c2619189c01d461c727350f211c1f7a | 2022-04-23T04:30:29.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | dapang | null | dapang/distilroberta-base-mrl-sym | 5 | null | transformers | 17,115 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mrl-sym
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrl-sym
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.740146306575944e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| No log | 1.0 | 150 | 0.0001 | 1.0 | 1.0 |
| No log | 2.0 | 300 | 0.0001 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0.dev20220422+cu116
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mofyrt/bert-base-uncased-finetuned-cola | b931d00c9a1a9425809e9eab7073e0ef290ac975 | 2022-04-23T18:04:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | mofyrt | null | mofyrt/bert-base-uncased-finetuned-cola | 5 | null | transformers | 17,116 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5905946625710334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7445
- Matthews Correlation: 0.5906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4926 | 1.0 | 535 | 0.5155 | 0.4941 |
| 0.2971 | 2.0 | 1070 | 0.5561 | 0.5320 |
| 0.1947 | 3.0 | 1605 | 0.7230 | 0.5677 |
| 0.1293 | 4.0 | 2140 | 0.7445 | 0.5906 |
| 0.0867 | 5.0 | 2675 | 0.8836 | 0.5788 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
brad1141/bertBasev2 | 27638dd491d99b3f885abb2ef746195eb0464c2e | 2022-04-23T14:44:30.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | brad1141 | null | brad1141/bertBasev2 | 5 | null | transformers | 17,117 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bertBasev2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertBasev2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0328
- Precision: 0.9539
- Recall: 0.9707
- F1: 0.9622
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.2004 | 1.0 | 1012 | 0.9504 | 0.2620 | 0.3519 | 0.3004 | 0.6856 |
| 1.0265 | 2.0 | 2024 | 0.6205 | 0.4356 | 0.5161 | 0.4725 | 0.7956 |
| 0.6895 | 3.0 | 3036 | 0.3269 | 0.6694 | 0.7302 | 0.6985 | 0.9044 |
| 0.44 | 4.0 | 4048 | 0.1325 | 0.8356 | 0.9091 | 0.8708 | 0.9667 |
| 0.2585 | 5.0 | 5060 | 0.0717 | 0.9259 | 0.9531 | 0.9393 | 0.9844 |
| 0.1722 | 6.0 | 6072 | 0.0382 | 0.9480 | 0.9619 | 0.9549 | 0.99 |
| 0.0919 | 7.0 | 7084 | 0.0328 | 0.9539 | 0.9707 | 0.9622 | 0.9911 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
SophieTr/RM_incr_lr_v1 | a588e9c54ca3820d201509ebdb9a5f30a7aeb2b8 | 2022-04-24T07:13:52.000Z | [
"pytorch",
"pegasus",
"feature-extraction",
"transformers"
] | feature-extraction | false | SophieTr | null | SophieTr/RM_incr_lr_v1 | 5 | null | transformers | 17,118 | Entry not found |
M-junaid-A/wav2vec-speech-project | 6a03235e6373e43725e6172219e30482a7f7446c | 2022-04-26T06:53:17.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | M-junaid-A | null | M-junaid-A/wav2vec-speech-project | 5 | null | transformers | 17,119 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-speech-project
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-speech-project
This model is a fine-tuned version of [kingabzpro/wav2vec2-large-xls-r-300m-Urdu](https://huggingface.co/kingabzpro/wav2vec2-large-xls-r-300m-Urdu) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
gagan3012/ArOCRv2 | 3a7741b48feae17405dc563d43362c60c258389f | 2022-04-26T22:33:08.000Z | [
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"transformers",
"generated_from_trainer",
"model-index"
] | null | false | gagan3012 | null | gagan3012/ArOCRv2 | 5 | null | transformers | 17,120 | ---
tags:
- generated_from_trainer
model-index:
- name: ArOCRv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArOCRv2
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8990
- Cer: 0.0722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9659 | 1.18 | 1000 | 1.6020 | 0.3575 |
| 0.1571 | 2.36 | 2000 | 0.8990 | 0.0722 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.11.6
|
Shashidhar/distilbert-base-uncased-finetuned-squad | 475d800a47dc4b1afbd985eae48abcd2953a6938 | 2022-05-13T00:57:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Shashidhar | null | Shashidhar/distilbert-base-uncased-finetuned-squad | 5 | null | transformers | 17,121 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1205 | 1.0 | 5533 | 1.1080 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
MatthewAlanPow1/distilbert-base-uncased-finetuned-cola | e94c73de559e522a2e0dfc75ae08eccf9cd5cdc9 | 2022-04-25T17:26:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | MatthewAlanPow1 | null | MatthewAlanPow1/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 17,122 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5421747077088894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7994
- Matthews Correlation: 0.5422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.42 | 1.0 | 535 | 0.4631 | 0.5242 |
| 0.2823 | 2.0 | 1070 | 0.5755 | 0.5056 |
| 0.1963 | 3.0 | 1605 | 0.6767 | 0.5478 |
| 0.1441 | 4.0 | 2140 | 0.7742 | 0.5418 |
| 0.1069 | 5.0 | 2675 | 0.7994 | 0.5422 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
swcrazyfan/Kingify-2Way-T5-Large-v1_1 | babc95f246800aa131a2a8db04b709a99b736c05 | 2022-04-26T09:15:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"english",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | swcrazyfan | null | swcrazyfan/Kingify-2Way-T5-Large-v1_1 | 5 | null | transformers | 17,123 | ---
language: english
tags:
- t5
widget:
- text: "dekingify: "
example_title: "Translate 17th-century English to modern English"
- text: "kingify: "
example_title: "Translate modern English to 17th-century English"
---
# Kingify 2Way
This is a custom AI model that translates modern English into 17th-century "King James" English and back.
## Details of the model
This model is a fine-tuned version of [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large), trained on a dataset of a modern Bible translation paired with the matching King James Bible verses.
## Intended uses & limitations
Despite sharing the same language and general grammatical rules, English from earlier centuries can easily be misunderstood at times. The purpose of this model is to explore ways to understand 17th-century texts more clearly.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("swcrazyfan/Kingify-2Way")
model = AutoModelWithLMHead.from_pretrained("swcrazyfan/Kingify-2Way")
```
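A minimal generation sketch continuing from the loading code above; the `"kingify: "` prefix comes from the widget examples in this card, while the input sentence and `max_length` are illustrative assumptions:
```python
# Prefix the input with the translation direction, as in the widget examples.
text = "kingify: Do not be afraid, for I am with you."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```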
#### Limitations and bias
- The model is trained on the King James Version of the Bible, so it will work best with Christian-style language (or even clichés).
- Before the 18th and 19th centuries, English spelling was inconsistent. Because of this, the model often does not recognize spellings different from those in the KJV.
- The model was trained on a relatively small amount of data, so it will not be as accurate as a model trained on a larger data set.
## Training data
The data used to train this model is from the New English Translation and the King James Version of the Bible.
## Training procedure
The model was trained on Kaggle using the Hugging Face Transformers library.
### Training hyperparameters
The following hyperparameters were used during training:
- num_train_epochs: 4
- learning_rate: 5e-04
- train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
## Eval results
The model was evaluated by a human rater, who judged the translation quality without being told which sentences were produced by the model and which were written by a human.
## BibTeX entry and citation info
```bibtex
@inproceedings{,
title={Kingify 2Way},
author={Joshua Kaufmann},
year={2022},
url={https://huggingface.co/swcrazyfan/Kingify-2Way-T5-Large-v1_1}
}
``` |
Cheatham/xlm-roberta-large-finetuned-dA-001 | 07d279c89a998a95d2b96d8ffc5d94c12f053892 | 2022-04-25T12:35:46.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-dA-001 | 5 | null | transformers | 17,124 | Entry not found |
bullmount/quanIta_t5 | 7425406d2c6aa253a9f52f8f1112ebcf110ad80e | 2022-04-26T05:32:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"it",
"transformers",
"text2text_generation",
"question_answering",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | bullmount | null | bullmount/quanIta_t5 | 5 | null | transformers | 17,125 | ---
tags:
- text2text_generation
- question_answering
language:
- it
model-index:
- name: quanIta_t5
results: []
widget:
- text: "Quante torri ha Bologna? La torre degli Asinelli è una delle cosiddette due torri di Bologna, simbolo della città, situate in piazza di porta Ravegnana, all'incrocio tra le antiche strade San Donato (ora via Zamboni), San Vitale, Maggiore e Castiglione. Eretta, secondo la tradizione, fra il 1109 e il 1119 dal nobile Gherardo Asinelli, la torre è alta 97,20 metri, pende verso ovest per 2,23 metri e presenta all'interno una scalinata composta da 498 gradini. Ancora non si può dire con certezza quando e da chi fu costruita la torre degli Asinelli. Si presume che la torre debba il proprio nome a Gherardo Asinelli, il nobile cavaliere di fazione ghibellina al quale se ne attribuisce la costruzione, iniziata secondo una consolidata tradizione l'11 ottobre 1109 e terminata dieci anni dopo, nel 1119."
- text: "Chi costruì la torre degli Asinelli? La torre degli Asinelli è una delle cosiddette due torri di Bologna, simbolo della città, situate in piazza di porta Ravegnana, all'incrocio tra le antiche strade San Donato (ora via Zamboni), San Vitale, Maggiore e Castiglione. Eretta, secondo la tradizione, fra il 1109 e il 1119 dal nobile Gherardo Asinelli, la torre è alta 97,20 metri, pende verso ovest per 2,23 metri e presenta all'interno una scalinata composta da 498 gradini. Ancora non si può dire con certezza quando e da chi fu costruita la torre degli Asinelli. Si presume che la torre debba il proprio nome a Gherardo Asinelli, il nobile cavaliere di fazione ghibellina al quale se ne attribuisce la costruzione, iniziata secondo una consolidata tradizione l'11 ottobre 1109 e terminata dieci anni dopo, nel 1119."
- text: "Chi è l'autore della Gioconda? La torre degli Asinelli è una delle cosiddette due torri di Bologna, simbolo della città, situate in piazza di porta Ravegnana, all'incrocio tra le antiche strade San Donato (ora via Zamboni), San Vitale, Maggiore e Castiglione. Eretta, secondo la tradizione, fra il 1109 e il 1119 dal nobile Gherardo Asinelli, la torre è alta 97,20 metri, pende verso ovest per 2,23 metri e presenta all'interno una scalinata composta da 498 gradini. Ancora non si può dire con certezza quando e da chi fu costruita la torre degli Asinelli. Si presume che la torre debba il proprio nome a Gherardo Asinelli, il nobile cavaliere di fazione ghibellina al quale se ne attribuisce la costruzione, iniziata secondo una consolidata tradizione l'11 ottobre 1109 e terminata dieci anni dopo, nel 1119."
- text: "Chi fece accostare Seneca agli insegnamenti di Pitagora?
Seneca seguì molto intensamente gli insegnamenti dei maestri, che esercitarono su di lui un profondo influsso sia con la parola sia con l'esempio di una vita vissuta in coerenza con gli ideali professati. Da Attalo imparò i principi dello stoicismo e l'abitudine alle pratiche ascetiche. Da Sozione, oltre ad apprendere i principi delle dottrine di Pitagora, fu avviato per qualche tempo verso la pratica vegetariana; venne distolto però dal padre che non amava la filosofia e dal fatto che l'imperatore Tiberio proibisse di seguire consuetudini di vita non romane."
---
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base), which was pre-trained on the [Thoroughly Cleaned Italian mC4 Corpus](https://huggingface.co/datasets/gsarti/clean_mc4_it) (~41B words, ~275GB).<br/>
It is an it5-based question-answering model for the Italian language.<br/>
Fine-tuning was done on a translated subset of the SQuAD 2.0 dataset (about 100k questions).<br/>
Thus, the model not only attempts to answer questions through reading comprehension, but also refrains from answering when the question cannot be answered from the paragraph provided.
You can test the model by entering a question followed by its context, as in the string shown below:
```
In quale anno si è verificato il terremoto nel Sichuan?
Il terremoto del Sichuan del 2008 o il terremoto del Gran Sichuan, misurato a 8.0 Ms e 7.9 Mw, e si è verificato alle 02:28:01 PM China Standard Time all' epicentro (06:28:01 UTC) il 12 maggio nella provincia del Sichuan, ha ucciso 69.197 persone e lasciato 18.222 dispersi.
```
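A minimal sketch of running such a query with the `transformers` text2text pipeline is shown below; the choice of the `text2text-generation` pipeline is an assumption based on the model's tags, and the input string is the example from the card above:
```python
from transformers import pipeline

qa = pipeline("text2text-generation", model="bullmount/quanIta_t5")
query = (
    "In quale anno si è verificato il terremoto nel Sichuan? "
    "Il terremoto del Sichuan del 2008 o il terremoto del Gran Sichuan, misurato a 8.0 Ms e 7.9 Mw, "
    "e si è verificato alle 02:28:01 PM China Standard Time all' epicentro (06:28:01 UTC) il 12 maggio "
    "nella provincia del Sichuan, ha ucciso 69.197 persone e lasciato 18.222 dispersi."
)
print(qa(query)[0]["generated_text"])
```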
The trained model achieves the following results:
- EM: 78.69
- F1: 84.69
- rouge1: precision=0.862, recall=0.849, fmeasure=0.845
- rouge2: precision=0.309, recall=0.300, fmeasure=0.298
- rougeL: precision=0.862, recall=0.849, fmeasure=0.845
- rougeLsum: precision=0.862, recall=0.849, fmeasure=0.845
|
Cheatham/xlm-roberta-large-finetuned-dAB-001 | 904460ee4f12700e795b30296348f15e8eb804f2 | 2022-04-25T18:06:58.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-dAB-001 | 5 | null | transformers | 17,126 | Entry not found |
anshr/distilgpt2_reward_model_03 | 3138ff252f01e83f746e83b95ccf6f208182b84d | 2022-04-26T01:02:21.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | anshr | null | anshr/distilgpt2_reward_model_03 | 5 | null | transformers | 17,127 | Entry not found |
crcb/isear_bert | 64b3cc2932cab99acd4e47f6a506d7533f005749 | 2022-04-26T03:14:10.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:crcb/autotrain-data-isear_bert",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | crcb | null | crcb/isear_bert | 5 | null | transformers | 17,128 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- crcb/autotrain-data-isear_bert
co2_eq_emissions: 0.026027055434994496
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 786224257
- CO2 Emissions (in grams): 0.026027055434994496
## Validation Metrics
- Loss: 0.8348872065544128
- Accuracy: 0.7272727272727273
- Macro F1: 0.7230931630686932
- Micro F1: 0.7272727272727273
- Weighted F1: 0.7236599456423468
- Macro Precision: 0.7328252157220334
- Micro Precision: 0.7272727272727273
- Weighted Precision: 0.7336599708829821
- Macro Recall: 0.7270448163292604
- Micro Recall: 0.7272727272727273
- Weighted Recall: 0.7272727272727273
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-isear_bert-786224257
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-isear_bert-786224257", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-isear_bert-786224257", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
stefan-it/it5-efficient-small-el32 | 15c23b9731b7a71e714e5a06d9de0ecf662b2c94 | 2022-04-26T07:27:25.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | stefan-it | null | stefan-it/it5-efficient-small-el32 | 5 | 2 | transformers | 17,129 | ---
license: mit
---
|
manueltonneau/bert-twitter-es-lost-job | f86a65eafd0949b374b402952cc30d929e0819c9 | 2022-04-26T16:04:49.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"arxiv:2203.09178",
"transformers"
] | text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-es-lost-job | 5 | null | transformers | 17,130 | ---
language: es
widget:
- text: "Hoy perdí mi trabajo..."
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Lost Job (1), else (0)
- country: MX
- language: Spanish
- architecture: BERT base
## Model description
This model is a version of `dccuchile/bert-base-spanish-wwm-cased` finetuned to recognize Spanish tweets where a user mentions that she lost her job in the past month. It was trained on Spanish tweets from users based in Mexico. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user recently lost her job (label=1)
- the negative class referring to all other tweets (label=0)
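A minimal classification sketch with the `transformers` pipeline is shown below; the example tweet is taken from the widget above, and relying on the pipeline defaults is an assumption:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="manueltonneau/bert-twitter-es-lost-job")
# Positive class (label 1) = the user reports recently losing their job; negative class (label 0) = anything else.
print(classifier("Hoy perdí mi trabajo..."))
```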
## Resources
The dataset of Spanish tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
UT/BRTW_DEBIAS | 87bfb8a61e6413436ee2f3a357b5d63d1ff4db8f | 2022-04-27T08:54:01.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | UT | null | UT/BRTW_DEBIAS | 5 | null | transformers | 17,131 | Entry not found |
caush/Clickbait2 | 28940b18770367d40953ad17123758727f900139 | 2022-04-26T21:15:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | caush | null | caush/Clickbait2 | 5 | null | transformers | 17,132 | ---
tags:
- generated_from_trainer
model-index:
- name: Clickbait2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clickbait2
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.05 | 50 | 0.0213 |
| No log | 0.09 | 100 | 0.0213 |
| No log | 0.14 | 150 | 0.0213 |
| No log | 0.18 | 200 | 0.0216 |
| No log | 0.23 | 250 | 0.0214 |
| No log | 0.27 | 300 | 0.0212 |
| No log | 0.32 | 350 | 0.0214 |
| No log | 0.36 | 400 | 0.0212 |
| No log | 0.41 | 450 | 0.0218 |
| 0.0219 | 0.46 | 500 | 0.0219 |
| 0.0219 | 0.5 | 550 | 0.0214 |
| 0.0219 | 0.55 | 600 | 0.0216 |
| 0.0219 | 0.59 | 650 | 0.0217 |
| 0.0219 | 0.64 | 700 | 0.0214 |
| 0.0219 | 0.68 | 750 | 0.0214 |
| 0.0219 | 0.73 | 800 | 0.0214 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.1.0
- Tokenizers 0.12.1
|
manueltonneau/bert-twitter-es-job-offer | f32aa9dd33ad6a0db5ed0c300630fc0cfea2a94d | 2022-04-26T20:10:22.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"arxiv:2203.09178",
"transformers"
] | text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-es-job-offer | 5 | null | transformers | 17,133 | ---
language: es
widget:
- text: "Difunde a contactos: #trabajo: Cajeros Zona Taxqueña- Turnos fijos. Oaxaca"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Job Offer (1), else (0)
- country: MX
- language: Spanish
- architecture: BERT base
## Model description
This model is a version of `dccuchile/bert-base-spanish-wwm-cased` finetuned to recognize Spanish tweets containing job offers. It was trained on Spanish tweets from users based in Mexico. The task is framed as a binary classification problem with:
- the positive class referring to tweets containing job offers (label=1)
- the negative class referring to all other tweets (label=0)
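A minimal classification sketch with the `transformers` pipeline is shown below; the example tweet is taken from the widget above, and relying on the pipeline defaults is an assumption:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="manueltonneau/bert-twitter-es-job-offer")
# Positive class (label 1) = the tweet contains a job offer; negative class (label 0) = anything else.
print(classifier("Difunde a contactos: #trabajo: Cajeros Zona Taxqueña- Turnos fijos. Oaxaca"))
```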
## Resources
The dataset of Spanish tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
huggingtweets/ai_curio_bot | 9b0b9a3c6031e889f63bb65f3e9c3abbe482759b | 2022-04-27T09:37:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ai_curio_bot | 5 | null | transformers | 17,134 | ---
language: en
thumbnail: http://www.huggingtweets.com/ai_curio_bot/1651052269778/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1516142458660401154/YdxpLcQj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ai_curio_bot (ADMISSIONS OPEN)(GUIDED DIFFUSION)</div>
<div style="text-align: center; font-size: 14px;">@ai_curio_bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ai_curio_bot (ADMISSIONS OPEN)(GUIDED DIFFUSION).
| Data | ai_curio_bot (ADMISSIONS OPEN)(GUIDED DIFFUSION) |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 51 |
| Short tweets | 716 |
| Tweets kept | 2483 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3os17v54/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ai_curio_bot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2krcmz6f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2krcmz6f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ai_curio_bot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pistachiocow/product_description_generator_bad | 1184e697a236f95352a51dc20b6c1bca48cdd956 | 2022-04-27T14:07:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | pistachiocow | null | pistachiocow/product_description_generator_bad | 5 | null | transformers | 17,135 | Entry not found |
Brawl/UKRI_DistilBERT | d9477c2b13282ba58b18a0cf89dd0121d131fdf8 | 2022-04-27T15:54:51.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Brawl | null | Brawl/UKRI_DistilBERT | 5 | null | transformers | 17,136 | Entry not found |
LiYuan/amazon-cross-encoder | f8018f8b5c6ed3c18cec26cc6432d76f18d68b7e | 2022-04-27T18:36:36.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | LiYuan | null | LiYuan/amazon-cross-encoder | 5 | null | transformers | 17,137 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8244
- Accuracy: 0.6617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8981 | 1.0 | 35702 | 0.8662 | 0.6371 |
| 0.7837 | 2.0 | 71404 | 0.8244 | 0.6617 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
gagan3012/ArOCRv4 | efe7fcf23bf8a053fab2b5ab8831e55816808ae4 | 2022-04-27T20:23:52.000Z | [
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"transformers",
"generated_from_trainer",
"model-index"
] | null | false | gagan3012 | null | gagan3012/ArOCRv4 | 5 | null | transformers | 17,138 | ---
tags:
- generated_from_trainer
model-index:
- name: ArOCRv4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArOCRv4
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5811
- Cer: 0.1249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 3.103 | 1.18 | 1000 | 8.0852 | 11.5974 |
| 1.2535 | 2.36 | 2000 | 2.0400 | 0.4904 |
| 0.5682 | 3.55 | 3000 | 1.9336 | 0.2145 |
| 0.3038 | 4.73 | 4000 | 1.5811 | 0.1249 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.11.6
|
Mim/pro-cell-expert | 904f1325b1236902ae279da1babd8887366dd8bc | 2022-04-29T11:36:58.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:Mim/autotrain-data-procell-expert",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | Mim | null | Mim/pro-cell-expert | 5 | null | transformers | 17,139 | ---
tags: autotrain
language: unk
widget:
- text: "ACE2 overexpression in AAV cell lines"
datasets:
- Mim/autotrain-data-procell-expert
co2_eq_emissions: 0.004814823138367317
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 800724769
- CO2 Emissions (in grams): 0.004814823138367317
## Validation Metrics
- Loss: 0.4749071002006531
- Accuracy: 0.9
- Precision: 0.8928571428571429
- Recall: 0.9615384615384616
- AUC: 0.9065934065934066
- F1: 0.9259259259259259
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Mim/autotrain-procell-expert-800724769
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Mim/autotrain-procell-expert-800724769", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Mim/autotrain-procell-expert-800724769", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
TehranNLP-org/electra-base-hateXplain | 63e6e822a1e229a4ff3a4291f57f2b93104714d8 | 2022-05-03T17:00:31.000Z | [
"pytorch",
"electra",
"text-classification",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | TehranNLP-org | null | TehranNLP-org/electra-base-hateXplain | 5 | null | transformers | 17,140 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: HATEXPLAIN
type: ''
args: hatexplain
metrics:
- name: Accuracy
type: accuracy
value: 0.4162330905306972
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the HATEXPLAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7667
- Accuracy: 0.4162
- Accuracy 0: 0.8145
- Accuracy 1: 0.1895
- Accuracy 2: 0.3084
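This card does not include a usage snippet; below is a minimal loading sketch using the standard sequence-classification API (the three-way HateXplain label mapping is read from the model config and is an assumption, not something stated in this card).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "TehranNLP-org/electra-base-hateXplain"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Classify a single sentence; the model was fine-tuned on HateXplain (3 classes).
inputs = tokenizer("You are a wonderful person", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class, model.config.id2label.get(predicted_class, predicted_class))
```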
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: not_parallel
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Accuracy 0 | Accuracy 1 | Accuracy 2 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:----------:|:----------:|
| No log | 1.0 | 481 | 0.7431 | 0.4152 | 0.7707 | 0.1805 | 0.3650 |
| No log | 2.0 | 962 | 0.7346 | 0.4152 | 0.8010 | 0.2190 | 0.2774 |
| No log | 3.0 | 1443 | 0.7667 | 0.4162 | 0.8145 | 0.1895 | 0.3084 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
|
astremo/JAINU | 913b2349d1ea03f69bc95c690fff80ddbefbe2c6 | 2022-05-22T05:51:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ja",
"ain",
"transformers",
"japanese",
"ainu",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | astremo | null | astremo/JAINU | 5 | 4 | transformers | 17,141 | ---
language:
- ja
- ain
license: cc-by-4.0
tags:
- japanese
- ainu
---
# JAINU-Model (T5 fine-tuned model)
JAINU is a Japanese-to-Ainu machine translation model.
⚠️ Attention! The model is still experimental and needs to be refined!
# Examples
| input | output|
|---|---|
|こんにちは|イランカラプテ|
|ありがとうございます|イヤイライケレ|
|熊は神ですか|キムンカムイアナクカムイネヤ?|
|熊は怖いのか|キムンカムイアナクアシトマプネヤ?|
|フクロウは鳥です|イソサンケカムイアナクチカプネ|
|分かりません!|ケラムシカレ!|
|勉強した?|ヤイホノッカエキプネヤ?|
|してないです|クキカソモキ|
|さようなら|アプンノオカヤン|
# References
t5 japanese pre-trained model: sonoisa t5-base-japanese (https://huggingface.co/sonoisa/t5-base-japanese)
# License
Shield: [![CC BY 4.0][cc-by-shield]][cc-by]
This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].
[![CC BY 4.0][cc-by-image]][cc-by]
[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
|
PHISSTOOD/codet5-small-code-summarization-python | 432cdede8e92ce45969eea7450f9733157688424 | 2022-05-01T03:55:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PHISSTOOD | null | PHISSTOOD/codet5-small-code-summarization-python | 5 | null | transformers | 17,142 | Entry not found |
behroz/sp_proj | 3f56ff0c0d3dd20dd7d2a6e9e59dda534ce8e98b | 2022-05-06T20:39:46.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | behroz | null | behroz/sp_proj | 5 | null | transformers | 17,143 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sp_proj
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sp_proj
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.10.3
|
Raffay/my_final_wav2vec2-urdu-asr-project | 29ba84945453b41bb17b9b499ce2122c85e48416 | 2022-05-01T16:09:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | Raffay | null | Raffay/my_final_wav2vec2-urdu-asr-project | 5 | null | transformers | 17,144 | ---
tags:
- generated_from_trainer
model-index:
- name: my_final_wav2vec2-urdu-asr-project
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_final_wav2vec2-urdu-asr-project
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4680
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 7.8981 | 1.41 | 200 | 5.5809 | 1.0 |
| 5.254 | 2.82 | 400 | 5.4720 | 1.0 |
| 5.2209 | 4.23 | 600 | 5.4862 | 1.0 |
| 5.256 | 5.63 | 800 | 5.4716 | 1.0 |
| 5.1244 | 7.04 | 1000 | 5.4912 | 1.0 |
| 5.0641 | 8.45 | 1200 | 5.4797 | 1.0 |
| 5.0923 | 9.86 | 1400 | 5.5290 | 1.0 |
| 5.0166 | 11.27 | 1600 | 5.4722 | 1.0 |
| 5.1251 | 12.68 | 1800 | 5.4690 | 1.0 |
| 5.0201 | 14.08 | 2000 | 5.4684 | 1.0 |
| 5.1285 | 15.49 | 2200 | 5.4745 | 1.0 |
| 5.0853 | 16.9 | 2400 | 5.4734 | 1.0 |
| 5.0112 | 18.31 | 2600 | 5.4668 | 1.0 |
| 5.0372 | 19.72 | 2800 | 5.4680 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Muennighoff/t5-small-finetuned-xsum-512 | 772370a0af3b8cc51b4bc30a6d3af4913a495385 | 2022-05-01T10:55:33.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Muennighoff | null | Muennighoff/t5-small-finetuned-xsum-512 | 5 | null | transformers | 17,145 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-512
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.8448
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-512
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4706
- Rouge1: 28.8448
- Rouge2: 7.9819
- Rougel: 22.8686
- Rougelsum: 22.8754
- Gen Len: 18.7654
T5, zero-shot on the same evaluation set:
`{'rouge1': 19.2304, 'rouge2': 2.5842, 'rougeL': 13.9683, 'rougeLsum': 15.516}`
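A minimal inference sketch, assuming the usual T5 text-to-text interface; whether the `summarize:` prefix was used during fine-tuning is not stated in the card, so treat it as an assumption.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Muennighoff/t5-small-finetuned-xsum-512"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "The full text of a news article to be summarized goes here."
inputs = tokenizer("summarize: " + article, return_tensors="pt",
                   truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```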
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7057 | 1.0 | 7854 | 2.4706 | 28.8448 | 7.9819 | 22.8686 | 22.8754 | 18.7654 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2
- Datasets 2.1.0
- Tokenizers 0.12.1
|
charly/autotrain-sentiment-4-812425472 | 87a7f46843bc10dd154132eaa6ba81a7dba882c8 | 2022-05-02T00:38:00.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:charly/autotrain-data-sentiment-4",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | charly | null | charly/autotrain-sentiment-4-812425472 | 5 | null | transformers | 17,146 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- charly/autotrain-data-sentiment-4
co2_eq_emissions: 0.007597570744740809
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 812425472
- CO2 Emissions (in grams): 0.007597570744740809
## Validation Metrics
- Loss: 0.5105093121528625
- Accuracy: 0.8268156424581006
- Macro F1: 0.6020923520923521
- Micro F1: 0.8268156424581006
- Weighted F1: 0.8021395116367184
- Macro Precision: 0.5907986111111111
- Micro Precision: 0.8268156424581006
- Weighted Precision: 0.7792248603351954
- Macro Recall: 0.6141625496464206
- Micro Recall: 0.8268156424581006
- Weighted Recall: 0.8268156424581006
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/charly/autotrain-sentiment-4-812425472
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("charly/autotrain-sentiment-4-812425472", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("charly/autotrain-sentiment-4-812425472", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
DioLiu/distilbert-base-uncased-finetuned-sst2-newdata | 215b4a8854a277a1128063ac78318ac6af22ab95 | 2022-05-02T12:40:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | DioLiu | null | DioLiu/distilbert-base-uncased-finetuned-sst2-newdata | 5 | null | transformers | 17,147 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-newdata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-newdata
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0588
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0543 | 1.0 | 1116 | 0.0307 | 0.9911 |
| 0.0235 | 2.0 | 2232 | 0.0372 | 0.9911 |
| 0.0102 | 3.0 | 3348 | 0.0486 | 0.9914 |
| 0.0003 | 4.0 | 4464 | 0.0563 | 0.9914 |
| 0.0008 | 5.0 | 5580 | 0.0588 | 0.9911 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False | 26f6448d463617db95e50e7a0e0c7de3f6a570df | 2022-05-02T13:43:39.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/DistilBERTFINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False | 5 | null | transformers | 17,148 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERTFINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERTFINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4527
- Precision: 0.2844
- Recall: 0.9676
- F1: 0.4395
- Accuracy: 0.2991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.1044 | 0.9742 | 1.0 | 0.9869 | 0.9742 |
| No log | 2.0 | 332 | 0.1269 | 0.9742 | 1.0 | 0.9869 | 0.9742 |
| No log | 3.0 | 498 | 0.1028 | 0.9742 | 1.0 | 0.9869 | 0.9742 |
| 0.0947 | 4.0 | 664 | 0.0836 | 0.9826 | 0.9971 | 0.9898 | 0.9799 |
| 0.0947 | 5.0 | 830 | 0.0884 | 0.9854 | 0.9912 | 0.9883 | 0.9771 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False | 58612b52e712c1f0d2369ca0a91d4b0a023be80f | 2022-05-02T14:00:18.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False | 5 | null | transformers | 17,149 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0557
- Precision: 0.9930
- Recall: 0.9878
- F1: 0.9904
- Accuracy: 0.9814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 479 | 0.3334 | 0.9041 | 0.9041 | 0.9041 | 0.8550 |
| 0.3756 | 2.0 | 958 | 0.3095 | 0.8991 | 0.9251 | 0.9119 | 0.8649 |
| 0.2653 | 3.0 | 1437 | 0.3603 | 0.8929 | 0.9527 | 0.9218 | 0.8779 |
| 0.1991 | 4.0 | 1916 | 0.3907 | 0.8919 | 0.9540 | 0.9219 | 0.8779 |
| 0.1586 | 5.0 | 2395 | 0.3642 | 0.9070 | 0.9356 | 0.9211 | 0.8788 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
kurama/bert-finetuned-ner | b81a0fbe1b4dafb25c182b6a01bbb3c650de0ce6 | 2022-05-02T14:02:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | kurama | null | kurama/bert-finetuned-ner | 5 | null | transformers | 17,150 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9321865696328151
- name: Recall
type: recall
value: 0.9485021878155503
- name: F1
type: f1
value: 0.9402736069402736
- name: Accuracy
type: accuracy
value: 0.9860187201977983
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Precision: 0.9322
- Recall: 0.9485
- F1: 0.9403
- Accuracy: 0.9860
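For quick inference, the checkpoint can be wrapped in the token-classification pipeline; a short sketch (the CoNLL-2003 entity label set is assumed from the dataset above, not stated explicitly in this card).

```python
from transformers import pipeline

# aggregation_strategy groups word pieces back into full entity spans.
ner = pipeline("token-classification",
               model="kurama/bert-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```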
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0831 | 1.0 | 1756 | 0.0652 | 0.9213 | 0.9392 | 0.9302 | 0.9835 |
| 0.0413 | 2.0 | 3512 | 0.0567 | 0.9292 | 0.9495 | 0.9392 | 0.9861 |
| 0.0192 | 3.0 | 5268 | 0.0617 | 0.9322 | 0.9485 | 0.9403 | 0.9860 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False | 5ff1da3d87729b255bee9476168bbd774b764f7d | 2022-05-02T18:29:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False | 5 | null | transformers | 17,151 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8119
- Precision: 0.2752
- Recall: 0.9522
- F1: 0.4270
- Accuracy: 0.2849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.0726 | 0.9827 | 1.0 | 0.9913 | 0.9828 |
| No log | 2.0 | 332 | 0.0569 | 0.9827 | 1.0 | 0.9913 | 0.9828 |
| No log | 3.0 | 498 | 0.0434 | 0.9884 | 1.0 | 0.9942 | 0.9885 |
| 0.1021 | 4.0 | 664 | 0.0505 | 0.9884 | 1.0 | 0.9942 | 0.9885 |
| 0.1021 | 5.0 | 830 | 0.0472 | 0.9884 | 1.0 | 0.9942 | 0.9885 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
maesneako/gpt2-fr_paco-cheese_e3 | cf332f315fe883965766052c6cb6552f3b9afbc1 | 2022-05-02T20:06:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | maesneako | null | maesneako/gpt2-fr_paco-cheese_e3 | 5 | null | transformers | 17,152 | Entry not found |
mdroth/bert-finetuned-ner | d10310f3cb882bacbb31bd8d08d4f85d4700a75a | 2022-05-26T18:32:46.000Z | [
"pytorch",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | mdroth | null | mdroth/bert-finetuned-ner | 5 | null | transformers | 17,153 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9331020812685827
- name: Recall
type: recall
value: 0.9506900033658701
- name: F1
type: f1
value: 0.9418139379793263
- name: Accuracy
type: accuracy
value: 0.9865926885265203
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0589
- Precision: 0.9331
- Recall: 0.9507
- F1: 0.9418
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0857 | 1.0 | 1756 | 0.0621 | 0.9181 | 0.9382 | 0.9281 | 0.9836 |
| 0.0308 | 2.0 | 3512 | 0.0611 | 0.9228 | 0.9458 | 0.9342 | 0.9846 |
| 0.0223 | 3.0 | 5268 | 0.0589 | 0.9331 | 0.9507 | 0.9418 | 0.9866 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.9.0+cu111
- Datasets 2.0.0
- Tokenizers 0.10.3
|
chebmarcel/sun | 84df9fbf3758b30ab1ade57d0a558970aeaf2e51 | 2022-05-03T12:41:23.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | chebmarcel | null | chebmarcel/sun | 5 | null | transformers | 17,154 | Entry not found |
moma1820/sen_pair_cluster4 | 4c0806ae6c529dd985156818239a18b772be85c9 | 2022-05-03T12:34:53.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | moma1820 | null | moma1820/sen_pair_cluster4 | 5 | null | transformers | 17,155 | Entry not found |
TweebankNLP/bertweet-tb2-pos-tagging | 41aaa15a9052135239fcb4b3f8cc77f686fc63be | 2022-05-05T00:23:38.000Z | [
"pytorch",
"roberta",
"token-classification",
"arxiv:2201.07281",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | token-classification | false | TweebankNLP | null | TweebankNLP/bertweet-tb2-pos-tagging | 5 | null | transformers | 17,156 | ---
license: cc-by-nc-4.0
---
## Model Specification
- This is a **baseline Twitter POS tagging model (95.21% accuracy)** on Tweebank V2's POS tagging benchmark (the corpus is also called `Tweebank-NER`), trained on the Tweebank-NER training data.
- **If you are looking for the SOTA Twitter POS tagger**, please go to this [HuggingFace hub link](https://huggingface.co/TweebankNLP/bertweet-tb2_ewt-pos-tagging).
- For more details about the `TweebankNLP` project, please refer to [our paper](https://arxiv.org/pdf/2201.07281.pdf) and the [github](https://github.com/social-machines/TweebankNLP) page.
- In the paper, it is referred to as `HuggingFace-BERTweet (TB2)` in the POS table.
## How to use the model
- **PRE-PROCESSING**: when you apply the model to tweets, please make sure that the tweets are preprocessed with the [TweetTokenizer](https://github.com/VinAIResearch/BERTweet/blob/master/TweetNormalizer.py) to get the best performance.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("TweebankNLP/bertweet-tb2-pos-tagging")
model = AutoModelForTokenClassification.from_pretrained("TweebankNLP/bertweet-tb2-pos-tagging")
```
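For tag-level predictions, the same checkpoint can also be wrapped in a token-classification pipeline; a short sketch (the input is assumed to be already normalized with the tweet tokenizer mentioned above).

```python
from transformers import pipeline

pos_tagger = pipeline("token-classification",
                      model="TweebankNLP/bertweet-tb2-pos-tagging",
                      tokenizer="TweebankNLP/bertweet-tb2-pos-tagging")
# Input should already be normalized with the BERTweet tweet tokenizer.
print(pos_tagger("RT @user : this is a normalized tweet !"))
```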
## References
If you use this repository in your research, please kindly cite [our paper](https://arxiv.org/pdf/2201.07281.pdf):
```bibtex
@article{jiang2022tweetnlp,
title={Annotating the Tweebank Corpus on Named Entity Recognition and Building NLP Models for Social Media Analysis},
author={Jiang, Hang and Hua, Yining and Beeferman, Doug and Roy, Deb},
journal={In Proceedings of the 13th Language Resources and Evaluation Conference (LREC)},
year={2022}
}
``` |
enimai/opus-mt-en-ru-finetuned-en-to-ru | f25fdac198c1137b384d0d838ae6309e4397bb31 | 2022-05-03T16:56:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | enimai | null | enimai/opus-mt-en-ru-finetuned-en-to-ru | 5 | null | transformers | 17,157 | ---
license: apache-2.0
---
|
Nakul24/RoBERTa-emotion-extraction | 43abab03b92f84ca618992ea26084d641b294c5e | 2022-05-04T16:23:29.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Nakul24 | null | Nakul24/RoBERTa-emotion-extraction | 5 | 1 | transformers | 17,158 | Entry not found |
DioLiu/distilbert-base-uncased-finetuned-sst2-shake-wiki | 9a10bf42d90374928f1c7f1e1f21302a99cc3112 | 2022-05-05T06:39:28.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | DioLiu | null | DioLiu/distilbert-base-uncased-finetuned-sst2-shake-wiki | 5 | null | transformers | 17,159 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-shake-wiki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-shake-wiki
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0096
- Accuracy: 0.9994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.001 | 1.0 | 5029 | 0.0120 | 0.9988 |
| 0.0017 | 2.0 | 10058 | 0.0028 | 0.9996 |
| 0.0 | 3.0 | 15087 | 0.0094 | 0.9992 |
| 0.0 | 4.0 | 20116 | 0.0091 | 0.9994 |
| 0.0 | 5.0 | 25145 | 0.0096 | 0.9994 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
cradle-bio/thermo-predictor-thermo-evotuning-prot_bert | a46e54855eef78a766b3bb83bf7f5e803d3b0f98 | 2022-05-06T12:46:22.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | cradle-bio | null | cradle-bio/thermo-predictor-thermo-evotuning-prot_bert | 5 | null | transformers | 17,160 | ---
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: thermo-predictor-thermo-evotuning-prot_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thermo-predictor-thermo-evotuning-prot_bert
This model is a fine-tuned version of [thundaa/thermo-evotuning-prot_bert](https://huggingface.co/thundaa/thermo-evotuning-prot_bert) on the cradle-bio/tape-thermostability dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1617
- Spearmanr: 0.6914
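No usage snippet is provided; a rough loading sketch is given below. The single-logit regression head and the space-separated amino-acid input format are assumptions based on common ProtBERT conventions, not confirmed by this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "cradle-bio/thermo-predictor-thermo-evotuning-prot_bert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# ProtBERT-style models expect amino acids separated by spaces.
sequence = " ".join("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # predicted thermostability score(s)
```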
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 16384
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 0.4734 | 0.68 | 2 | 0.3146 | 0.3359 |
| 0.4392 | 1.68 | 4 | 0.2936 | 0.3407 |
| 0.4034 | 2.68 | 6 | 0.2633 | 0.3696 |
| 0.3669 | 3.68 | 8 | 0.2437 | 0.3903 |
| 0.3496 | 4.68 | 10 | 0.2377 | 0.4102 |
| 0.3351 | 5.68 | 12 | 0.2285 | 0.4204 |
| 0.3289 | 6.68 | 14 | 0.2267 | 0.4180 |
| 0.3267 | 7.68 | 16 | 0.2258 | 0.4242 |
| 0.3177 | 8.68 | 18 | 0.2206 | 0.4295 |
| 0.3116 | 9.68 | 20 | 0.2150 | 0.4365 |
| 0.3039 | 10.68 | 22 | 0.2115 | 0.4365 |
| 0.2985 | 11.68 | 24 | 0.2062 | 0.4469 |
| 0.2927 | 12.68 | 26 | 0.2045 | 0.4531 |
| 0.2885 | 13.68 | 28 | 0.2005 | 0.4603 |
| 0.2838 | 14.68 | 30 | 0.1987 | 0.4690 |
| 0.2806 | 15.68 | 32 | 0.1975 | 0.4744 |
| 0.2772 | 16.68 | 34 | 0.1970 | 0.4765 |
| 0.2728 | 17.68 | 36 | 0.1939 | 0.4845 |
| 0.2684 | 18.68 | 38 | 0.1931 | 0.4858 |
| 0.2641 | 19.68 | 40 | 0.1925 | 0.4936 |
| 0.2608 | 20.68 | 42 | 0.1905 | 0.4929 |
| 0.2566 | 21.68 | 44 | 0.1886 | 0.5049 |
| 0.2518 | 22.68 | 46 | 0.1875 | 0.5095 |
| 0.2467 | 23.68 | 48 | 0.1869 | 0.5141 |
| 0.2424 | 24.68 | 50 | 0.1859 | 0.5161 |
| 0.2375 | 25.68 | 52 | 0.1850 | 0.5223 |
| 0.2329 | 26.68 | 54 | 0.1851 | 0.5210 |
| 0.2279 | 27.68 | 56 | 0.1850 | 0.5294 |
| 0.2226 | 28.68 | 58 | 0.1837 | 0.5310 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
masakhane/m2m100_418M_en_lug_rel | ce802ba7a7e5ee6d7b99c91942d4b9d41e89eb55 | 2022-05-05T14:29:05.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_lug_rel | 5 | null | transformers | 17,161 | ---
license: afl-3.0
---
|
benjamin/gpt2-wechsel-scottish-gaelic | 0eab6ebb4e06512d302aa76a89f1e958310c0d58 | 2022-07-13T23:39:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"gd",
"transformers",
"license:mit"
] | text-generation | false | benjamin | null | benjamin/gpt2-wechsel-scottish-gaelic | 5 | 1 | transformers | 17,162 | ---
language: gd
license: mit
---
# gpt2-wechsel-scottish-gaelic
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
| Model | PPL |
|---|---|
| `gpt2-wechsel-sundanese` | **111.72** |
| `gpt2` (retrained from scratch) | 149.46 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-scottish-gaelic` | **16.43** |
| `gpt2` (retrained from scratch) | 19.53 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-uyghur` | **34.33** |
| `gpt2` (retrained from scratch) | 42.82 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-malagasy` | **14.01** |
| `gpt2` (retrained from scratch) | 15.93 |
See our paper for details.
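A minimal generation sketch, assuming the standard GPT-2 causal-LM interface; the Scottish Gaelic prompt is only illustrative.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "benjamin/gpt2-wechsel-scottish-gaelic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Tha an latha"  # illustrative Scottish Gaelic prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```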
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
benjamin/gpt2-wechsel-sundanese | 73e26088c229d0a624e91137b159089d27a299c9 | 2022-07-13T23:45:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"su",
"transformers",
"license:mit"
] | text-generation | false | benjamin | null | benjamin/gpt2-wechsel-sundanese | 5 | null | transformers | 17,163 | ---
language: su
license: mit
---
# gpt2-wechsel-sundanese
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
| Model | PPL |
|---|---|
| `gpt2-wechsel-sundanese` | **111.72** |
| `gpt2` (retrained from scratch) | 149.46 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-scottish-gaelic` | **16.43** |
| `gpt2` (retrained from scratch) | 19.53 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-uyghur` | **34.33** |
| `gpt2` (retrained from scratch) | 42.82 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-malagasy` | **14.01** |
| `gpt2` (retrained from scratch) | 15.93 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
Khalsuu/english-filipino-wav2vec2-l-xls-r-test-02 | 88fa97ae20acdec8b64ad64f5eac44d7ea3172b9 | 2022-05-05T22:56:01.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:filipino_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Khalsuu | null | Khalsuu/english-filipino-wav2vec2-l-xls-r-test-02 | 5 | null | transformers | 17,164 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- filipino_voice
model-index:
- name: english-filipino-wav2vec2-l-xls-r-test-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# english-filipino-wav2vec2-l-xls-r-test-02
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4561
- Wer: 0.2632
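A minimal transcription sketch, assuming the processor and tokenizer files are bundled with the checkpoint (typical for fine-tuned wav2vec2 models, but not stated in this card).

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Khalsuu/english-filipino-wav2vec2-l-xls-r-test-02"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# wav2vec2 expects 16 kHz mono audio.
speech, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```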
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1707 | 2.09 | 400 | 0.8006 | 0.8224 |
| 0.4801 | 4.19 | 800 | 0.3363 | 0.4329 |
| 0.2541 | 6.28 | 1200 | 0.3365 | 0.3676 |
| 0.1851 | 8.38 | 1600 | 0.3485 | 0.3739 |
| 0.1408 | 10.47 | 2000 | 0.3628 | 0.3420 |
| 0.1098 | 12.57 | 2400 | 0.3979 | 0.3277 |
| 0.1019 | 14.66 | 2800 | 0.4031 | 0.2896 |
| 0.0887 | 16.75 | 3200 | 0.3977 | 0.3024 |
| 0.0798 | 18.85 | 3600 | 0.3959 | 0.3129 |
| 0.0671 | 20.94 | 4000 | 0.4489 | 0.3241 |
| 0.0633 | 23.04 | 4400 | 0.4455 | 0.3026 |
| 0.055 | 25.13 | 4800 | 0.4668 | 0.2910 |
| 0.0523 | 27.23 | 5200 | 0.4670 | 0.2960 |
| 0.0468 | 29.32 | 5600 | 0.4536 | 0.2781 |
| 0.0392 | 31.41 | 6000 | 0.4612 | 0.2860 |
| 0.0381 | 33.51 | 6400 | 0.4651 | 0.2841 |
| 0.034 | 35.6 | 6800 | 0.4723 | 0.2716 |
| 0.0315 | 37.7 | 7200 | 0.4546 | 0.2642 |
| 0.0294 | 39.79 | 7600 | 0.4561 | 0.2632 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jimregan/wav2vec-ljspeech-splits | 401eeb2f6f79661b3c4ddd2ac2d1cc5dfb9fcbf2 | 2022-05-06T19:56:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jimregan | null | jimregan/wav2vec-ljspeech-splits | 5 | null | transformers | 17,165 | ---
license: apache-2.0
---
|
birgermoell/liepa-lithuanian | b56c3adf3543901f3e4986e3155ea251daf77fc8 | 2022-05-06T13:10:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/liepa-lithuanian | 5 | null | transformers | 17,166 | Entry not found |
allenai/tk-instruct-3b-def-pos-neg | 252a02ec7005f103a212f359684a6b83456af558 | 2022-05-27T06:30:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:natural instructions v2.0",
"arxiv:1910.10683",
"arxiv:2204.07705",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/tk-instruct-3b-def-pos-neg | 5 | null | transformers | 17,167 | ---
language: en
license: apache-2.0
datasets:
- natural instructions v2.0
---
# Model description
Tk-Instruct is a series of encoder-decoder Transformer models that are trained to solve various NLP tasks by following in-context instructions (plain language task definitions, k-shot examples, explanations, etc). Built upon the pre-trained [T5 models](https://arxiv.org/abs/1910.10683), they are fine-tuned on a large number of tasks & instructions that are collected in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. This enables the model to not only process the training tasks, but also generalize to many unseen tasks without further parameter updates.
More resources for using the model:
- **Paper**: [link](https://arxiv.org/abs/2204.07705)
- **Code repository**: [Tk-Instruct](https://github.com/yizhongw/Tk-Instruct)
- **Official Website**: [Natural Instructions](https://instructions.apps.allenai.org/)
- **All released models**: [allenai/tk-instruct](https://huggingface.co/models?search=allenai/tk-instruct)
## Intended uses & limitations
Tk-Instruct can be used to do many NLP tasks by following instructions.
### How to use
When instructing the model, the task definition, demonstration examples, or explanations should be prepended to the original input and fed into the model. You can easily try Tk-Instruct models as follows:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-3b-def")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-3b-def")
>>> input_ids = tokenizer.encode(
"Definition: return the currency of the given country. Now complete the following example - Input: India. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'Indian Rupee'
>>> input_ids = tokenizer.encode(
"Definition: negate the following sentence. Input: John went to school. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'John did not go to school.'
```
### Limitations
We are still working on understanding the behaviors of these models, but here are several issues we have found:
- Models are generally sensitive to the instruction. Sometimes rewording the instruction can lead to very different output.
- Models are not always compliant with the instruction. Sometimes the model doesn't follow your instruction (e.g., when you ask the model to generate one sentence, it might still generate one word or a long story).
- Models might totally fail on some tasks.
If you find serious issues or any interesting results, you are welcome to share them with us!
## Training data
Tk-Instruct is trained using the tasks & instructions in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. We follow the official train/test split. The Tk-Instruct model series was trained using 757 tasks, and the mTk-Instruct series was trained using 1271 tasks (including some non-English tasks).
The training tasks are in 64 broad categories, such as text categorization / question answering / sentiment analysis / summarization / grammar error detection / dialogue generation / etc. The other 12 categories are selected for evaluation.
## Training procedure
All our models are initialized from either T5 models or mT5 models. Because generating the output can be regarded as a form of language modeling, we used their [LM adapted version](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k). All data is converted into a text-to-text format, and models are fine-tuned to maximize the likelihood of the output sequence.
Our [released models](https://huggingface.co/models?search=allenai/tk-instruct) are in different sizes, and each of them was trained with a specific type of instruction encoding. For instance, `tk-instruct-3b-def-pos` was initialized from [t5-xl-lm-adapt](https://huggingface.co/google/t5-xl-lm-adapt), and it saw task definition & 2 positive examples as the instruction during training time.
Although they are trained with only one type of instruction encoding, we found they can usually work with other types of encodings at test time (see more in our paper).
### BibTeX entry and citation info
```bibtex
@article{wang2022benchmarking,
title={Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and A. Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and M. Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddharth Deepak Mishra and Sujan C. Reddy and Sumanta Patro and Tanay Dixit and Xu-dong Shen and Chitta Baral and Yejin Choi and Hannaneh Hajishirzi and Noah A. Smith and Daniel Khashabi},
year={2022},
archivePrefix={arXiv},
eprint={2204.07705},
primaryClass={cs.CL},
}
``` |
chrishistewandb/hugging-face | 3236dd0ec6901dc8e38f48606c234e0a2c79ec80 | 2022-05-11T19:49:11.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | chrishistewandb | null | chrishistewandb/hugging-face | 5 | null | transformers | 17,168 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: hugging-face
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hugging-face
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
omar47/wav2vec2-large-xls-r-300m-urdu-v2 | e9346a4ed9380cd80d5078de2081c1b26322b288 | 2022-05-14T04:53:01.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | omar47 | null | omar47/wav2vec2-large-xls-r-300m-urdu-v2 | 5 | null | transformers | 17,169 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-urdu-CV_8_0-and-PRUS_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-urdu-CV_8_0-and-PRUS_v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3541
- Wer: 0.6532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 14.8521 | 0.52 | 32 | 20.0617 | 1.0 |
| 9.2152 | 1.05 | 64 | 7.8943 | 1.0 |
| 4.8598 | 1.57 | 96 | 5.1558 | 1.0 |
| 3.866 | 2.1 | 128 | 3.9680 | 1.0 |
| 3.3517 | 2.62 | 160 | 3.4201 | 1.0 |
| 3.2029 | 3.15 | 192 | 3.2355 | 1.0 |
| 3.1509 | 3.67 | 224 | 3.2337 | 1.0 |
| 3.1399 | 4.2 | 256 | 3.1627 | 1.0 |
| 3.0848 | 4.72 | 288 | 3.0550 | 1.0 |
| 2.9806 | 5.25 | 320 | 2.8343 | 0.9996 |
| 2.3814 | 5.77 | 352 | 2.0685 | 0.9523 |
| 1.2936 | 6.3 | 384 | 1.5907 | 0.8657 |
| 0.8656 | 6.82 | 416 | 1.3810 | 0.8235 |
| 0.7014 | 7.34 | 448 | 1.3838 | 0.7920 |
| 0.6015 | 7.87 | 480 | 1.3479 | 0.8046 |
| 0.5341 | 8.39 | 512 | 1.2613 | 0.7757 |
| 0.5031 | 8.92 | 544 | 1.2818 | 0.7890 |
| 0.4349 | 9.44 | 576 | 1.3171 | 0.7739 |
| 0.4198 | 9.97 | 608 | 1.2420 | 0.7750 |
| 0.3593 | 10.49 | 640 | 1.2991 | 0.7587 |
| 0.3252 | 11.02 | 672 | 1.2653 | 0.7228 |
| 0.2715 | 11.54 | 704 | 1.2488 | 0.7350 |
| 0.2733 | 12.07 | 736 | 1.2639 | 0.7110 |
| 0.2338 | 12.59 | 768 | 1.3733 | 0.7454 |
| 0.2403 | 13.11 | 800 | 1.3908 | 0.7228 |
| 0.2106 | 13.64 | 832 | 1.3384 | 0.7224 |
| 0.2041 | 14.16 | 864 | 1.3770 | 0.7050 |
| 0.1814 | 14.69 | 896 | 1.3526 | 0.6932 |
| 0.1742 | 15.21 | 928 | 1.3486 | 0.6895 |
| 0.1658 | 15.74 | 960 | 1.3210 | 0.6936 |
| 0.1455 | 16.26 | 992 | 1.3292 | 0.6858 |
| 0.1399 | 16.79 | 1024 | 1.3521 | 0.6828 |
| 0.1325 | 17.31 | 1056 | 1.3339 | 0.6876 |
| 0.1256 | 17.84 | 1088 | 1.3389 | 0.6836 |
| 0.1219 | 18.36 | 1120 | 1.3496 | 0.6769 |
| 0.1212 | 18.89 | 1152 | 1.3277 | 0.6776 |
| 0.1097 | 19.41 | 1184 | 1.3594 | 0.6762 |
| 0.1129 | 19.93 | 1216 | 1.3448 | 0.6688 |
| 0.1036 | 20.46 | 1248 | 1.3295 | 0.6710 |
| 0.1035 | 20.98 | 1280 | 1.3243 | 0.6577 |
| 0.094 | 21.51 | 1312 | 1.3832 | 0.6591 |
| 0.0912 | 22.03 | 1344 | 1.3857 | 0.6584 |
| 0.0815 | 22.56 | 1376 | 1.3739 | 0.6547 |
| 0.0864 | 23.08 | 1408 | 1.3649 | 0.6554 |
| 0.0772 | 23.61 | 1440 | 1.3791 | 0.6458 |
| 0.0894 | 24.13 | 1472 | 1.3630 | 0.6488 |
| 0.0776 | 24.66 | 1504 | 1.3541 | 0.6532 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
miazhao/deberta_base_model_s3_ccnet_airbnb_dat_continue | 328d9fb472f7bb6b4dd742f7ee25197ca927b9a3 | 2022-05-12T22:02:30.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | miazhao | null | miazhao/deberta_base_model_s3_ccnet_airbnb_dat_continue | 5 | null | transformers | 17,170 | Entry not found |
Jiexing/spider_relation_t5_3b-2624 | bea41b2abb1734e76cf318bf38380b3a6a44fd9e | 2022-05-08T01:49:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Jiexing | null | Jiexing/spider_relation_t5_3b-2624 | 5 | null | transformers | 17,171 | Entry not found |
Jeevesh8/bert_ft_cola-0 | ee1d00855b173f564ff2d47a4e9a9f1f10443e81 | 2022-05-09T08:58:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-0 | 5 | null | transformers | 17,172 | Entry not found |
Chramer/remote-sensing-distilbert-cased | 1cd7f7c7a7ca9aafce748696610d6b9e6356f055 | 2022-05-10T10:11:16.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Chramer | null | Chramer/remote-sensing-distilbert-cased | 5 | 1 | transformers | 17,173 | ---
widget:
- text: "Earth [MASK] is a growing field."
- text: "Multiple [MASK] channels enable full polarimetry"
- text: "The [MASK] is capable of measuring in limb and nadir geometry"
---
# RemoteSensing Distilbert

The field of Earth observation is growing rapidly. More and more data scientists are interested in this domain and are developing computer vision applications that do amazing things, while NLP doesn't seem to receive much consideration in this area.
That's why I posted [Chramer/remote-sensing-distilbert-cased](https://huggingface.co/Chramer/remote-sensing-distilbert-cased). This is a masked language model trained on a corpus of technical information about space missions, instruments, and sensors.
The model is based on [distilbert-base-cased](https://huggingface.co/distilbert-base-cased), but I didn't have the chance to tune the model's hyperparameters because of my limited computational resources. So there's a lot to improve! 😆
It was fun to publish my first model on Hugging Face! 🤩
**Author:** Marcello Politi ([Twitter 🐦](https://twitter.com/_March08_), [LinkedIn 💼](https://www.linkedin.com/in/marcello-politi/)).
# Perplexity
Test set: 4.5k sentences about technical space stuff.
| Model | Perplexity |
| ------ | ------ |
| remote-sensing-distilbert-cased | **6.45** |
| distilbert-base-cased | 33.77 |
# Usage
```python
from transformers import AutoModel, AutoTokenizer
model_name = "Chramer/remote-sensing-distilbert-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
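For the masked-token predictions shown in the widget examples above, the checkpoint can also be used with the fill-mask pipeline; a short usage sketch:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Chramer/remote-sensing-distilbert-cased")
for pred in fill("Earth [MASK] is a growing field."):
    print(f"{pred['token_str']:>15}  {pred['score']:.3f}")
```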
|
Jeevesh8/bert_ft_cola-11 | b56b405dd4cc966cd9c23a029cd9bb791099baff | 2022-05-09T14:01:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-11 | 5 | null | transformers | 17,174 | Entry not found |
Jeevesh8/bert_ft_cola-12 | 4bb3b2d0a54bbb05295ed4bbb56c8ddbdd4d04c5 | 2022-05-09T14:02:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-12 | 5 | null | transformers | 17,175 | Entry not found |
Jeevesh8/bert_ft_cola-20 | d2a74272ba07f781c89c2d078cbf5a210c7c5bd7 | 2022-05-09T14:07:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-20 | 5 | null | transformers | 17,176 | Entry not found |
Jeevesh8/bert_ft_cola-21 | 472ec9ff54bd0c00ce1dba748921df5ae0147b6a | 2022-05-09T14:08:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-21 | 5 | null | transformers | 17,177 | Entry not found |
Jeevesh8/bert_ft_cola-39 | e76f64290dd9a535305e7653697ee729e76617fc | 2022-05-09T14:19:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-39 | 5 | null | transformers | 17,178 | Entry not found |
Jeevesh8/bert_ft_cola-45 | 35b56409280e7376f4282b22c5d5d380d8d9ac43 | 2022-05-09T14:24:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-45 | 5 | null | transformers | 17,179 | Entry not found |
Jeevesh8/bert_ft_cola-55 | e829ad8278d15b64515a73117aa93574c30aa7ae | 2022-05-09T14:30:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-55 | 5 | null | transformers | 17,180 | Entry not found |
Jeevesh8/bert_ft_cola-59 | 6533ddef00b478ed6f530405181183b294801624 | 2022-05-09T14:33:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-59 | 5 | null | transformers | 17,181 | Entry not found |
Jeevesh8/bert_ft_cola-62 | 538d03ffb3426b8bda53b00326026936f35ae30c | 2022-05-09T14:35:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-62 | 5 | null | transformers | 17,182 | Entry not found |
Jeevesh8/bert_ft_cola-63 | 954328c12b11a1bb0818d5b8c5412a7d5fa0ec8e | 2022-05-09T14:36:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-63 | 5 | null | transformers | 17,183 | Entry not found |
Jeevesh8/bert_ft_cola-65 | a5b1b57e1b53fadf8169e11d411dd585acdba5bf | 2022-05-09T14:37:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-65 | 5 | null | transformers | 17,184 | Entry not found |
Jeevesh8/bert_ft_cola-79 | bd9284f703518135f76668bbd0c7492f5459eba6 | 2022-05-09T14:46:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-79 | 5 | null | transformers | 17,185 | Entry not found |
Jeevesh8/bert_ft_cola-83 | 75ab1e6d032428b19fb28e5e0d16caf818fe1bf0 | 2022-05-09T14:49:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-83 | 5 | null | transformers | 17,186 | Entry not found |
Jeevesh8/bert_ft_cola-85 | 7ff84c04bc55248e9962963cca0263b454dff793 | 2022-05-09T14:50:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-85 | 5 | null | transformers | 17,187 | Entry not found |
princeton-nlp/CoFi-CoLA-s95 | bd503bc66975127fb1985209f66864f5cc751db3 | 2022-05-09T15:24:06.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"transformers"
] | text-classification | false | princeton-nlp | null | princeton-nlp/CoFi-CoLA-s95 | 5 | null | transformers | 17,188 | This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to 95% sparsity on the CoLA dataset. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you will have to use the model class specified in our repository to load the model. |
Santiagot1105/wav2vec2-large-xlsr-es-col-pro | 55d281c6d325b83fac1b8e63295ea2044504357e | 2022-05-10T11:19:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Santiagot1105 | null | Santiagot1105/wav2vec2-large-xlsr-es-col-pro | 5 | null | transformers | 17,189 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-es-col-pro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-es-col-pro
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0636
- Wer: 0.0507
## Model description
More information needed
## Intended uses & limitations
More information needed
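The original card leaves this section empty. As a rough illustration of intended use, the snippet below sketches transcription of a Spanish recording with the ASR pipeline; it is not part of the original card, and the audio path is a placeholder.
```python
from transformers import pipeline

# Minimal ASR sketch; "audio_es.wav" is a placeholder for a Spanish speech recording
asr = pipeline(
    "automatic-speech-recognition",
    model="Santiagot1105/wav2vec2-large-xlsr-es-col-pro",
)
print(asr("audio_es.wav")["text"])
```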
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1032 | 7.4 | 400 | 0.0618 | 0.0656 |
| 0.0687 | 14.81 | 800 | 0.0670 | 0.0619 |
| 0.0402 | 22.22 | 1200 | 0.0693 | 0.0573 |
| 0.0252 | 29.62 | 1600 | 0.0636 | 0.0507 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
mrm8488/electricidad-base-finetuned-parmex | 67664a9a539bcf3618abd53a6bf638cd825090eb | 2022-05-10T08:18:19.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | mrm8488 | null | mrm8488/electricidad-base-finetuned-parmex | 5 | 1 | transformers | 17,190 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: electricidad-base-finetuned-parmex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-base-finetuned-parmex
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0372
- F1: 0.9764
## Model description
More information needed
## Intended uses & limitations
More information needed
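The original card leaves this section empty. As a rough usage sketch (not part of the original card), the model can be loaded with the text-classification pipeline; note that the meaning of the returned labels depends on the fine-tuning dataset, which is not documented here.
```python
from transformers import pipeline

# Minimal sketch; label semantics depend on the undocumented fine-tuning dataset
classifier = pipeline(
    "text-classification",
    model="mrm8488/electricidad-base-finetuned-parmex",
)
print(classifier("Texto de ejemplo en español."))
```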
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.309269976237555e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 208 | 0.0377 | 0.9801 |
| No log | 2.0 | 416 | 0.0372 | 0.9764 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
selimonder/gptj-bswiki-8bit | 3a856072f7f76f8157d92de53aff058103642dd7 | 2022-05-10T09:30:07.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | selimonder | null | selimonder/gptj-bswiki-8bit | 5 | null | transformers | 17,191 | Entry not found |
lucifermorninstar011/autotrain-defector_ner-846726994 | bdbb11b3fb6d723a93fb92d176279ca1f3868c0f | 2022-05-10T11:58:10.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:lucifermorninstar011/autotrain-data-defector_ner",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | lucifermorninstar011 | null | lucifermorninstar011/autotrain-defector_ner-846726994 | 5 | null | transformers | 17,192 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- lucifermorninstar011/autotrain-data-defector_ner
co2_eq_emissions: 101.31873212485134
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 846726994
- CO2 Emissions (in grams): 101.31873212485134
## Validation Metrics
- Loss: 0.032001420855522156
- Accuracy: 0.9895226362258249
- Precision: 0.9431602948450375
- Recall: 0.9486306771989856
- F1: 0.945887576828147
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lucifermorninstar011/autotrain-defector_ner-846726994
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("lucifermorninstar011/autotrain-defector_ner-846726994", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lucifermorninstar011/autotrain-defector_ner-846726994", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Kepa/exist_task1_es | 67a2156a1b82c35fd0a7fcc78f34829991294c66 | 2022-05-10T14:08:10.000Z | [
"pytorch",
"xlm-roberta",
"transformers"
] | null | false | Kepa | null | Kepa/exist_task1_es | 5 | null | transformers | 17,193 | |
florentgbelidji/setfit_emotion | 63f01ba380f0c0672d48aa25674ac048493cd975 | 2022-05-10T12:57:31.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | florentgbelidji | null | florentgbelidji/setfit_emotion | 5 | null | sentence-transformers | 17,194 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# florentgbelidji/setfit_emotion
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('florentgbelidji/setfit_emotion')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=florentgbelidji/setfit_emotion)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 203 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 4060,
"warmup_steps": 406,
"weight_decay": 0.01
}
```
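The parameter dump above can be hard to read on its own. The snippet below is a rough reconstruction of how a training run with these settings could be set up in `sentence-transformers`; the base checkpoint and the labelled examples are assumptions, not taken from the card.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical labelled data: single sentences paired with integer emotion labels
train_examples = [
    InputExample(texts=["i feel wonderful today"], label=1),
    InputExample(texts=["i am so angry right now"], label=0),
]

# Assumed MPNet-based starting checkpoint (the card only reports the final architecture)
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)  # batch_size from the card
train_loss = losses.BatchHardTripletLoss(model=model)                       # loss from the card

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,          # from the card
    warmup_steps=406,  # from the card
)
```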
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
taln-ls2n/POET | 633769c260b7f4116dc0e0fa3068b69083f04964 | 2022-07-06T23:49:35.000Z | [
"pytorch",
"camembert",
"token-classification",
"fr",
"dataset:qanastek/ANTILLES",
"arxiv:1911.03894",
"transformers",
"Transformers",
"sequence-tagger-model",
"autotrain_compatible"
] | token-classification | false | taln-ls2n | null | taln-ls2n/POET | 5 | 1 | transformers | 17,195 | ---
tags:
- Transformers
- token-classification
- sequence-tagger-model
language: fr
datasets:
- qanastek/ANTILLES
widget:
- text: "George Washington est allé à Washington"
---
# POET: A French Extended Part-of-Speech Tagger
- Corpora: [ANTILLES](https://github.com/qanastek/ANTILLES)
- Embeddings & Sequence Labelling: [CamemBERT](https://arxiv.org/abs/1911.03894)
- Number of Epochs: 115
**People Involved**
* [LABRAK Yanis](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)
* [DUFOUR Richard](https://cv.archives-ouvertes.fr/richard-dufour) (2)
**Affiliations**
1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
2. [LS2N, TALN team](https://www.ls2n.fr/equipe/taln/), Nantes University, Nantes, France.
## Demo: How to use in HuggingFace Transformers
Requires [transformers](https://pypi.org/project/transformers/): ```pip install transformers```
```python
from transformers import CamembertTokenizer, CamembertForTokenClassification, TokenClassificationPipeline
tokenizer = CamembertTokenizer.from_pretrained('taln-ls2n/POET')
model = CamembertForTokenClassification.from_pretrained('taln-ls2n/POET')
pos = TokenClassificationPipeline(model=model, tokenizer=tokenizer)
def make_prediction(sentence):
    labels = [l['entity'] for l in pos(sentence)]
    return list(zip(sentence.split(" "), labels))
res = make_prediction("George Washington est allé à Washington")
```
Output:

## Training data
`ANTILLES` is a part-of-speech tagging corpus based on [UD_French-GSD](https://universaldependencies.org/treebanks/fr_gsd/index.html), which was originally created in 2015 and is itself based on the [universal dependency treebank v2.0](https://github.com/ryanmcd/uni-dep-tb).
Originally, the corpus consisted of 400,399 words (16,341 sentences) annotated with 17 different classes. After applying our tag augmentation, we obtain 60 different classes, which add linguistic and semantic information such as gender, number, mood, person, tense or verb form given in the different CoNLL-03 fields of the original corpus.
We based our tags on the level of detail given by the [LIA_TAGG](http://pageperso.lif.univ-mrs.fr/frederic.bechet/download.html) statistical POS tagger written by [Frédéric Béchet](http://pageperso.lif.univ-mrs.fr/frederic.bechet/index-english.html) in 2001.
The corpus used for this model is available on [GitHub](https://github.com/qanastek/ANTILLES) in the [CoNLL-U format](https://universaldependencies.org/format.html).
Training data are fed to the model as raw text without any normalization phase, which makes the model case- and punctuation-sensitive.
## Original Tags
```plain
PRON VERB SCONJ ADP CCONJ DET NOUN ADJ AUX ADV PUNCT PROPN NUM SYM PART X INTJ
```
## New additional POS tags
| Abbreviation | Description | Examples |
|:--------:|:--------:|:--------:|
| PREP | Preposition | de |
| AUX | Auxiliary Verb | est |
| ADV | Adverb | toujours |
| COSUB | Subordinating conjunction | que |
| COCO | Coordinating Conjunction | et |
| PART | Demonstrative particle | -t |
| PRON | Pronoun | qui ce quoi |
| PDEMMS | Demonstrative Pronoun - Singular Masculine | ce |
| PDEMMP | Demonstrative Pronoun - Plural Masculine | ceux |
| PDEMFS | Demonstrative Pronoun - Singular Feminine | cette |
| PDEMFP | Demonstrative Pronoun - Plural Feminine | celles |
| PINDMS | Indefinite Pronoun - Singular Masculine | tout |
| PINDMP | Indefinite Pronoun - Plural Masculine | autres |
| PINDFS | Indefinite Pronoun - Singular Feminine | chacune |
| PINDFP | Indefinite Pronoun - Plural Feminine | certaines |
| PROPN | Proper noun | Houston |
| XFAMIL | Last name | Levy |
| NUM | Numerical Adjective | trentaine vingtaine |
| DINTMS | Masculine Numerical Adjective | un |
| DINTFS | Feminine Numerical Adjective | une |
| PPOBJMS | Pronoun complements of objects - Singular Masculine | le lui |
| PPOBJMP | Pronoun complements of objects - Plural Masculine | eux y |
| PPOBJFS | Pronoun complements of objects - Singular Feminine | moi la |
| PPOBJFP | Pronoun complements of objects - Plural Feminine | en y |
| PPER1S | Personal Pronoun First-Person - Singular | je |
| PPER2S | Personal Pronoun Second-Person - Singular | tu |
| PPER3MS | Personal Pronoun Third-Person - Singular Masculine | il |
| PPER3MP | Personal Pronoun Third-Person - Plural Masculine | ils |
| PPER3FS | Personal Pronoun Third-Person - Singular Feminine | elle |
| PPER3FP | Personal Pronoun Third-Person - Plural Feminine | elles |
| PREFS | Reflexive Pronoun First-Person - Singular | me m' |
| PREF | Reflexive Pronoun Third-Person - Singular | se s' |
| PREFP | Reflexive Pronoun First / Second-Person - Plural | nous vous |
| VERB | Verb | obtient |
| VPPMS | Past Participle - Singular Masculine | formulé |
| VPPMP | Past Participle - Plural Masculine | classés |
| VPPFS | Past Participle - Singular Feminine | appelée |
| VPPFP | Past Participle - Plural Feminine | sanctionnées |
| DET | Determinant | les l' |
| DETMS | Determinant - Singular Masculine | les |
| DETFS | Determinant - Singular Feminine | la |
| ADJ | Adjective | capable sérieux |
| ADJMS | Adjective - Singular Masculine | grand important |
| ADJMP | Adjective - Plural Masculine | grands petits |
| ADJFS | Adjective - Singular Feminine | française petite |
| ADJFP | Adjective - Plural Feminine | légères petites |
| NOUN | Noun | temps |
| NMS | Noun - Singular Masculine | drapeau |
| NMP | Noun - Plural Masculine | journalistes |
| NFS | Noun - Singular Feminine | tête |
| NFP | Noun - Plural Feminine | ondes |
| PREL | Relative Pronoun | qui dont |
| PRELMS | Relative Pronoun - Singular Masculine | lequel |
| PRELMP | Relative Pronoun - Plural Masculine | lesquels |
| PRELFS | Relative Pronoun - Singular Feminine | laquelle |
| PRELFP | Relative Pronoun - Plural Feminine | lesquelles |
| INTJ | Interjection | merci bref |
| CHIF | Numbers | 1979 10 |
| SYM | Symbol | € % |
| YPFOR | Endpoint | . |
| PUNCT | Ponctuation | : , |
| MOTINC | Unknown words | Technology Lady |
| X | Typos & others | sfeir 3D statu |
## Evaluation results
The test corpus used for this evaluation is available on [GitHub](https://github.com/qanastek/ANTILLES/blob/main/ANTILLES/test.conllu).
```plain
precision recall f1-score support
ADJ 0.9040 0.8828 0.8933 128
ADJFP 0.9811 0.9585 0.9697 434
ADJFS 0.9606 0.9826 0.9715 918
ADJMP 0.9613 0.9357 0.9483 451
ADJMS 0.9561 0.9611 0.9586 952
ADV 0.9870 0.9948 0.9908 1524
AUX 0.9956 0.9964 0.9960 1124
CHIF 0.9798 0.9774 0.9786 1239
COCO 1.0000 0.9989 0.9994 884
COSUB 0.9939 0.9939 0.9939 328
DET 0.9972 0.9972 0.9972 2897
DETFS 0.9990 1.0000 0.9995 1007
DETMS 1.0000 0.9993 0.9996 1426
DINTFS 0.9967 0.9902 0.9934 306
DINTMS 0.9923 0.9948 0.9935 387
INTJ 0.8000 0.8000 0.8000 5
MOTINC 0.5049 0.5827 0.5410 266
NFP 0.9807 0.9675 0.9740 892
NFS 0.9778 0.9699 0.9738 2588
NMP 0.9687 0.9495 0.9590 1367
NMS 0.9759 0.9560 0.9659 3181
NOUN 0.6164 0.8673 0.7206 113
NUM 0.6250 0.8333 0.7143 6
PART 1.0000 0.9375 0.9677 16
PDEMFP 1.0000 1.0000 1.0000 3
PDEMFS 1.0000 1.0000 1.0000 89
PDEMMP 1.0000 1.0000 1.0000 20
PDEMMS 1.0000 1.0000 1.0000 222
PINDFP 1.0000 1.0000 1.0000 3
PINDFS 0.8571 1.0000 0.9231 12
PINDMP 0.9000 1.0000 0.9474 9
PINDMS 0.9286 0.9701 0.9489 67
PINTFS 0.0000 0.0000 0.0000 2
PPER1S 1.0000 1.0000 1.0000 62
PPER2S 0.7500 1.0000 0.8571 3
PPER3FP 1.0000 1.0000 1.0000 9
PPER3FS 1.0000 1.0000 1.0000 96
PPER3MP 1.0000 1.0000 1.0000 31
PPER3MS 1.0000 1.0000 1.0000 377
PPOBJFP 1.0000 0.7500 0.8571 4
PPOBJFS 0.9167 0.8919 0.9041 37
PPOBJMP 0.7500 0.7500 0.7500 12
PPOBJMS 0.9371 0.9640 0.9504 139
PREF 1.0000 1.0000 1.0000 332
PREFP 1.0000 1.0000 1.0000 64
PREFS 1.0000 1.0000 1.0000 13
PREL 0.9964 0.9964 0.9964 277
PRELFP 1.0000 1.0000 1.0000 5
PRELFS 0.8000 1.0000 0.8889 4
PRELMP 1.0000 1.0000 1.0000 3
PRELMS 1.0000 1.0000 1.0000 11
PREP 0.9971 0.9977 0.9974 6161
PRON 0.9836 0.9836 0.9836 61
PROPN 0.9468 0.9503 0.9486 4310
PUNCT 1.0000 1.0000 1.0000 4019
SYM 0.9394 0.8158 0.8732 76
VERB 0.9956 0.9921 0.9938 2273
VPPFP 0.9145 0.9469 0.9304 113
VPPFS 0.9562 0.9597 0.9580 273
VPPMP 0.8827 0.9728 0.9256 147
VPPMS 0.9778 0.9794 0.9786 630
VPPRE 0.0000 0.0000 0.0000 1
X 0.9604 0.9935 0.9766 1073
XFAMIL 0.9386 0.9113 0.9248 1342
YPFOR 1.0000 1.0000 1.0000 2750
accuracy 0.9778 47574
macro avg 0.9151 0.9285 0.9202 47574
weighted avg 0.9785 0.9778 0.9780 47574
```
## BibTeX Citations
Please cite the following paper when using this model.
ANTILLES corpus and POET taggers:
```latex
@inproceedings{labrak:hal-03696042,
TITLE = {{ANTILLES: An Open French Linguistically Enriched Part-of-Speech Corpus}},
AUTHOR = {Labrak, Yanis and Dufour, Richard},
URL = {https://hal.archives-ouvertes.fr/hal-03696042},
BOOKTITLE = {{25th International Conference on Text, Speech and Dialogue (TSD)}},
ADDRESS = {Brno, Czech Republic},
PUBLISHER = {{Springer}},
YEAR = {2022},
MONTH = Sep,
KEYWORDS = {Part-of-speech corpus ; POS tagging ; Open tools ; Word embeddings ; Bi-LSTM ; CRF ; Transformers},
PDF = {https://hal.archives-ouvertes.fr/hal-03696042/file/ANTILLES_A_freNch_linguisTIcaLLy_Enriched_part_of_Speech_corpus.pdf},
HAL_ID = {hal-03696042},
HAL_VERSION = {v1},
}
```
UD_French-GSD corpora:
```latex
@misc{
universaldependencies,
title={UniversalDependencies/UD_French-GSD},
url={https://github.com/UniversalDependencies/UD_French-GSD}, journal={GitHub},
author={UniversalDependencies}
}
```
LIA TAGG:
```latex
@techreport{LIA_TAGG,
author = {Frédéric Béchet},
title = {LIA_TAGG: a statistical POS tagger + syntactic bracketer},
institution = {Aix-Marseille University & CNRS},
year = {2001}
}
```
Flair Embeddings:
```latex
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
## Acknowledgment
This work was financially supported by [Zenidoc](https://zenidoc.fr/) and the [ANR project DIETS](https://anr-diets.univ-avignon.fr) under the contract [ANR-20-CE23-0005](https://anr.fr/en/funded-projects-and-impact/funded-projects/project/funded/project/b2d9d3668f92a3b9fbbf7866072501ef-fd7e69d902/?tx_anrprojects_funded%5Bcontroller%5D=Funded&cHash=cb6d54d24c9e21e0d50fabf46bd56646).
|
wvangils/DistilGPT2-Beatles-Lyrics-finetuned | 66ff22af3ea4ff3018bc409df0532b86bbeb21fa | 2022-05-11T11:44:35.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | wvangils | null | wvangils/DistilGPT2-Beatles-Lyrics-finetuned | 5 | null | transformers | 17,196 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilGPT2-Beatles-Lyrics-finetuned
results: []
widget:
- text: "Last night in Kiev the"
example_title: "Kiev"
- text: "It hasn't rained in weeks"
example_title: "Rain"
---
# DistilGPT2-Beatles-Lyrics-finetuned
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [Huggingartists - beatles](https://huggingface.co/datasets/huggingartists/the-beatles) dataset. It will complete an input prompt with Beatles-like text.
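A minimal generation sketch (not part of the original card) using one of the widget prompts; the sampling settings are illustrative only.
```python
from transformers import pipeline

# Text-generation sketch with illustrative sampling settings
generator = pipeline("text-generation", model="wvangils/DistilGPT2-Beatles-Lyrics-finetuned")
output = generator("Last night in Kiev the", max_new_tokens=40, do_sample=True, temperature=0.9)
print(output[0]["generated_text"])
```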
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.748 | 1.0 | 165 | 2.3732 |
| 2.4395 | 2.0 | 330 | 2.1938 |
| 2.2968 | 3.0 | 495 | 2.1118 |
| 2.2075 | 4.0 | 660 | 2.0721 |
| 2.1393 | 5.0 | 825 | 2.0571 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
|
Matthijs/mobilevit-x-small | 98efb39f22843428ba6d78d5999f9fbdb844e1e3 | 2022-05-11T14:43:30.000Z | [
"pytorch",
"mobilevit",
"image-classification",
"transformers"
] | image-classification | false | Matthijs | null | Matthijs/mobilevit-x-small | 5 | null | transformers | 17,197 | Entry not found |
ceggian/sbert_pt_reddit_mnr_64 | c014a59c01c7db8448544513616736b18e50ecfb | 2022-05-11T20:10:24.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ceggian | null | ceggian/sbert_pt_reddit_mnr_64 | 5 | null | sentence-transformers | 17,198 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ceggian/sbert_pt_reddit_mnr_64
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ceggian/sbert_pt_reddit_mnr_64')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ceggian/sbert_pt_reddit_mnr_64')
model = AutoModel.from_pretrained('ceggian/sbert_pt_reddit_mnr_64')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ceggian/sbert_pt_reddit_mnr_64)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 39289 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3928,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
CleveGreen/FieldClassifier_v3_gpt | b932ba76842f85b240025869a901f4a81d7db79a | 2022-05-11T20:39:29.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | CleveGreen | null | CleveGreen/FieldClassifier_v3_gpt | 5 | null | transformers | 17,199 | Entry not found |