modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
amirbr/finetuning-sentiment-model-3000-samples | ceb533c91e6dee933b1d4ef0901b60fc16077603 | 2022-05-02T20:06:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | amirbr | null | amirbr/finetuning-sentiment-model-3000-samples | 4 | null | transformers | 19,500 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
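The list above corresponds to standard `transformers.TrainingArguments` fields. Purely as an illustration (the `output_dir` name and the surrounding training script are assumptions, not part of this card), the equivalent configuration could be sketched as:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="finetuning-sentiment-model-3000-samples",  # assumed name
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```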
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
adielsa/distilbert-base-uncased-finetuned-cola | d921c1a21722baca97027d3abed6a6dd7f65b947 | 2022-04-30T12:37:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | adielsa | null | adielsa/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 19,501 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5387376669923544
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8256
- Matthews Correlation: 0.5387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5257 | 1.0 | 535 | 0.5286 | 0.4093 |
| 0.3447 | 2.0 | 1070 | 0.5061 | 0.4972 |
| 0.2303 | 3.0 | 1605 | 0.5878 | 0.5245 |
| 0.1761 | 4.0 | 2140 | 0.7969 | 0.5153 |
| 0.1346 | 5.0 | 2675 | 0.8256 | 0.5387 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
TehranNLP-org/electra-base-mnli | d4aeffccce83c440cc2f163705eb17c3b165954c | 2022-05-03T17:01:07.000Z | [
"pytorch",
"electra",
"text-classification",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | TehranNLP-org | null | TehranNLP-org/electra-base-mnli | 4 | null | transformers | 19,502 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: MNLI
type: ''
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8879266428935303
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4265
- Accuracy: 0.8879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: not_parallel
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3762 | 1.0 | 12272 | 0.3312 | 0.8794 |
| 0.2542 | 2.0 | 24544 | 0.3467 | 0.8843 |
| 0.1503 | 3.0 | 36816 | 0.4265 | 0.8879 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
|
TehranNLP-org/bert-large-mnli | cd33ad6c53d6724570c15d72e02ceae5145d8a08 | 2022-05-03T17:02:10.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | TehranNLP-org | null | TehranNLP-org/bert-large-mnli | 4 | null | transformers | 19,503 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: MNLI
type: ''
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8572592969943963
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5092
- Accuracy: 0.8573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: not_parallel
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4736 | 1.0 | 12271 | 0.4213 | 0.8372 |
| 0.3248 | 2.0 | 24542 | 0.4055 | 0.8538 |
| 0.1571 | 3.0 | 36813 | 0.5092 | 0.8573 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
|
victoriapl01/sensitive_spanish_classifier | 298fff1bbbd78ca980eac5b1b3866c3861b093e4 | 2022-04-30T19:24:38.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | victoriapl01 | null | victoriapl01/sensitive_spanish_classifier | 4 | 1 | transformers | 19,504 | Entry not found |
radicalrascal/DialoGPT-medium-jimmy | 214b8e96bd7872e6f89b5ea554fc7e8a94d7ef08 | 2022-04-30T20:59:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"lm-head",
"causal-lm"
] | conversational | false | radicalrascal | null | radicalrascal/DialoGPT-medium-jimmy | 4 | null | transformers | 19,505 | ---
tags:
- conversational
- lm-head
- causal-lm
---
# Jimmy DialoGPT Model |
Yanael/bert-finetuned-mrpc | 4218fbeddaa87af9656a41d10229c59537f89027 | 2022-05-01T15:25:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Yanael | null | Yanael/bert-finetuned-mrpc | 4 | null | transformers | 19,506 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: bert-finetuned-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.8.1+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Nakul24/Spanbert-emotion-extraction | 7ad9e842c3699257202e0f6ea8a5c0982eea12ec | 2022-05-03T05:10:03.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Nakul24 | null | Nakul24/Spanbert-emotion-extraction | 4 | 1 | transformers | 19,507 | Enter the name of the emotion in the question field, and the text from which the emotion is to be extracted in the context field.
Example 1:
Question - Guilty
Context - I shouted to my mom
Example 2:
Question - Sad
Context - I felt betrayed when my girlfriend kissed another guy even though she was drunk
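A minimal sketch of this usage with the standard `transformers` question-answering pipeline (my own illustration, not provided by the author; it assumes the checkpoint loads as an ordinary extractive QA model):
```python
from transformers import pipeline

# Load the checkpoint as an extractive QA pipeline: the emotion name goes in
# the question field and the text to analyse goes in the context field.
qa = pipeline("question-answering", model="Nakul24/Spanbert-emotion-extraction")

result = qa(
    question="Sad",
    context="I felt betrayed when my girlfriend kissed another guy even though she was drunk",
)
print(result["answer"])  # the span of the context that expresses the emotion
```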
Note: the model is still at the development stage, so results may be a little strange. |
Yanael/dummy-model | 2193695a917c6391ca9bdfca256558c774b1cd52 | 2022-05-01T20:00:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Yanael | null | Yanael/dummy-model | 4 | null | transformers | 19,508 | # Dummy Model
Following the Hugging Face course |
crcb/emo_go_new | 72335e185cc316f5dfa9e35f30b8b05988a42c3f | 2022-05-02T04:17:02.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:crcb/autotrain-data-go_emo_new",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | crcb | null | crcb/emo_go_new | 4 | null | transformers | 19,509 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- crcb/autotrain-data-go_emo_new
co2_eq_emissions: 20.58663910106142
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 813325491
- CO2 Emissions (in grams): 20.58663910106142
## Validation Metrics
- Loss: 1.3628994226455688
- Accuracy: 0.5920355494787216
- Macro F1: 0.4844439507523978
- Micro F1: 0.5920355494787216
- Weighted F1: 0.5873137663478112
- Macro Precision: 0.5458988948121151
- Micro Precision: 0.5920355494787216
- Weighted Precision: 0.591386299522425
- Macro Recall: 0.4753100798358001
- Micro Recall: 0.5920355494787216
- Weighted Recall: 0.5920355494787216
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-go_emo_new-813325491
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-go_emo_new-813325491", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-go_emo_new-813325491", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False | 225ed0acd69b8fa2cdf572dd83ba9e7dab12e363 | 2022-05-02T13:37:28.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False | 4 | null | transformers | 19,510 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2555
- Precision: 1.0
- Recall: 0.0200
- F1: 0.0393
- Accuracy: 0.0486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 95 | 0.5756 | nan | 0.0 | nan | 0.715 |
| No log | 2.0 | 190 | 0.5340 | 0.6429 | 0.1579 | 0.2535 | 0.735 |
| No log | 3.0 | 285 | 0.5298 | 0.5833 | 0.3684 | 0.4516 | 0.745 |
| No log | 4.0 | 380 | 0.5325 | 0.5789 | 0.3860 | 0.4632 | 0.745 |
| No log | 5.0 | 475 | 0.5452 | 0.4815 | 0.4561 | 0.4685 | 0.705 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
waboucay/camembert-base-finetuned-nli-rua_wl | 4f90617c185c71d36d6e834c3eeb0b030a569433 | 2022-05-02T13:54:59.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"nli"
] | text-classification | false | waboucay | null | waboucay/camembert-base-finetuned-nli-rua_wl | 4 | null | transformers | 19,511 | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on the `validation` and `test` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 73.8 | 73.7 |
| test | 74.4 | 74.3 | |
Dizzykong/gpt2-quests | fbc4f76d91a6b52f164879559519db2d5af4f876 | 2022-05-02T19:01:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Dizzykong | null | Dizzykong/gpt2-quests | 4 | null | transformers | 19,512 | Entry not found |
caush/Clickbait4 | 20813401dc32f8608a62645ffe728079c3940df8 | 2022-05-02T20:39:40.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | caush | null | caush/Clickbait4 | 4 | null | transformers | 19,513 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Clickbait1
results: []
---
This model is a fine-tuned version of microsoft/Multilingual-MiniLM-L12-H384 on the Webis-Clickbait-17 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0261

The table below lists the performances achieved by the challenge participants. The primary evaluation measure is Mean Squared Error (MSE) with respect to the mean judgments of the annotators. Our result is 0.0261 on the MSE metric; we do not compute the other metrics. To avoid using data that was unknown at the time of the challenge, we do not use k-fold cross-validation.
| team | MSE | F1 | Precision | Recall | Accuracy | Runtime |
|----- |----- |--- |-----------|-------|---------|-------- |
|goldfish | 0.024 | 0.741 | 0.739 | 0.742 | 0.876 | 16:20:21|
|caush | 0.026 | | | | | 00:11:00|
|monkfish | 0.026 | 0.694 | 0.785 | 0.622 | 0.870 | 03:41:35|
|dartfish | 0.027 | 0.706 | 0.733 | 0.681 | 0.865 | 00:47:07|
|torpedo19 | 0.03 | 0.677 | 0.755 | 0.614 | 0.861 | 00:52:44|
|albacore | 0.031 | 0.67 | 0.731 | 0.62 | 0.855 | 00:01:10|
|blobfish | 0.032 | 0.646 | 0.738 | 0.574 | 0.85 | 00:03:22|
|zingel | 0.033 | 0.683 | 0.719 | 0.65 | 0.856 | 00:03:27|
|anchovy | 0.034 | 0.68 | 0.717 | 0.645 | 0.855 | 00:07:20|
|ray | 0.034 | 0.684 | 0.691 | 0.677 | 0.851 | 00:29:28|
|icarfish | 0.035 | 0.621 | 0.768 | 0.522 | 0.849 | 01:02:57|
|emperor | 0.036 | 0.641 | 0.714 | 0.581 | 0.845 | 00:04:03|
|carpetshark | 0.036 | 0.638 | 0.728 | 0.568 | 0.847 | 00:08:05|
|electriceel | 0.038 | 0.588 | 0.727 | 0.493 | 0.835 | 01:04:54|
|arowana | 0.039 | 0.656 | 0.659 | 0.654 | 0.837 | 00:35:24|
|pineapplefish | 0.041 | 0.631 | 0.642 | 0.621 | 0.827 | 00:54:28|
|whitebait | 0.043 | 0.565 | 0.7 | 0.474 | 0.826 | 00:04:31| |
IsekaiMeta/dapprf | e72f5ec483c5c7d01a194b76b4423ad68361669f | 2022-05-03T00:46:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | IsekaiMeta | null | IsekaiMeta/dapprf | 4 | null | transformers | 19,514 | ---
tags:
- conversational
---
# dapprf |
pfactorial/checkpoint-22500-epoch-20 | 957e11d8bfa59d6997c56455e2b7d07e66d74a8d | 2022-05-03T05:48:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | pfactorial | null | pfactorial/checkpoint-22500-epoch-20 | 4 | null | transformers | 19,515 | This is a question-generating model.
|
DioLiu/distilbert-base-uncased-finetuned-sst2-nostop | 0fece57fc926992ff4eb4bca8d53e9cae026eacd | 2022-05-03T06:43:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | DioLiu | null | DioLiu/distilbert-base-uncased-finetuned-sst2-nostop | 4 | null | transformers | 19,516 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-nostop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-nostop
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0701
- Accuracy: 0.9888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.125 | 1.0 | 1116 | 0.0975 | 0.9743 |
| 0.0599 | 2.0 | 2232 | 0.0692 | 0.9840 |
| 0.0191 | 3.0 | 3348 | 0.0570 | 0.9871 |
| 0.0109 | 4.0 | 4464 | 0.0660 | 0.9882 |
| 0.0092 | 5.0 | 5580 | 0.0701 | 0.9888 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DioLiu/distilbert-base-uncased-finetuned-sst2-moreShake | 82f1838fbb711d8b761965f0d4d6f6089dcf81f1 | 2022-05-03T10:10:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | DioLiu | null | DioLiu/distilbert-base-uncased-finetuned-sst2-moreShake | 4 | null | transformers | 19,517 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-moreShake
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-moreShake
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1864
- Accuracy: 0.9739
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1208 | 1.0 | 1957 | 0.1102 | 0.9661 |
| 0.0516 | 2.0 | 3914 | 0.1222 | 0.9704 |
| 0.0223 | 3.0 | 5871 | 0.1574 | 0.9690 |
| 0.0071 | 4.0 | 7828 | 0.1997 | 0.9706 |
| 0.0026 | 5.0 | 9785 | 0.1864 | 0.9739 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Someshfengde/distilbert-base-uncased-finetuned-emotion | 63aeb95a6eb962935a77912fc83b7b696f615d8a | 2022-05-03T12:13:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Someshfengde | null | Someshfengde/distilbert-base-uncased-finetuned-emotion | 4 | null | transformers | 19,518 | Entry not found |
mrm8488/data2vec-text-base-finetuned-cola | 486aa5f042d7cb4e266378a945fa8c1e9a5cfe00 | 2022-05-03T15:28:38.000Z | [
"pytorch",
"tensorboard",
"data2vec-text",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mrm8488 | null | mrm8488/data2vec-text-base-finetuned-cola | 4 | null | transformers | 19,519 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: data2vec-text-base-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5214716883534575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-cola
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5254
- Matthews Correlation: 0.5215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.160701759709141e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5632 | 1.0 | 535 | 0.5252 | 0.3869 |
| 0.4572 | 2.0 | 1070 | 0.5534 | 0.4758 |
| 0.3905 | 3.0 | 1605 | 0.4962 | 0.5259 |
| 0.3592 | 4.0 | 2140 | 0.5254 | 0.5215 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mrm8488/data2vec-text-base-finetuned-mrpc | b65b6a7ef1700ec0ef92a83be8e444539b82e009 | 2022-05-03T17:19:07.000Z | [
"pytorch",
"tensorboard",
"data2vec-text",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mrm8488 | null | mrm8488/data2vec-text-base-finetuned-mrpc | 4 | null | transformers | 19,520 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: data2vec-text-base-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8627450980392157
- name: F1
type: f1
value: 0.8992805755395683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-mrpc
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4087
- Accuracy: 0.8627
- F1: 0.8993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.486061628311107e-06
- train_batch_size: 4
- eval_batch_size: 16
- seed: 19
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6197 | 1.0 | 917 | 0.4720 | 0.8039 | 0.8606 |
| 0.4763 | 2.0 | 1834 | 0.4087 | 0.8627 | 0.8993 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mrm8488/data2vec-text-base-finetuned-rte | ec0e0bb2690138b206860421c2ed2544f390ad13 | 2022-05-04T15:26:07.000Z | [
"pytorch",
"tensorboard",
"data2vec-text",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mrm8488 | null | mrm8488/data2vec-text-base-finetuned-rte | 4 | null | transformers | 19,521 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: data2vec-text-base-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6209386281588448
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-rte
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6670
- Accuracy: 0.6209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.7091 | 0.4729 |
| No log | 2.0 | 312 | 0.6893 | 0.5271 |
| No log | 3.0 | 468 | 0.6670 | 0.6209 |
| 0.6919 | 4.0 | 624 | 0.6740 | 0.5921 |
| 0.6919 | 5.0 | 780 | 0.6644 | 0.6101 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ncthuan/vi-distilled-msmarco-MiniLM-L12-cos-v5 | 10f4a7f8f56fe415325e12fff8be9681f5c89180 | 2022-05-04T12:52:08.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2004.09813",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:mit"
] | sentence-similarity | false | ncthuan | null | ncthuan/vi-distilled-msmarco-MiniLM-L12-cos-v5 | 4 | null | sentence-transformers | 19,522 | ---
license: mit
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ncthuan/vi-distilled-msmarco-MiniLM-L12-cos-v5
This is a Vietnamese [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like question answering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ncthuan/vi-distilled-msmarco-MiniLM-L12-cos-v5')
embeddings = model.encode(sentences)
print(embeddings)
```
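Since the card lists semantic search as a target task, here is an illustrative retrieval sketch (the Vietnamese query and passages are assumed examples, not from the original card):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('ncthuan/vi-distilled-msmarco-MiniLM-L12-cos-v5')

query = "Điều kiện tốt nghiệp là gì?"  # assumed example query
passages = [
    "Sinh viên phải hoàn thành đủ số tín chỉ.",  # assumed example passages
    "Thư viện mở cửa từ 8 giờ sáng.",
]

# Embed the query and the passages, then rank passages by cosine similarity.
query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(query_emb, passage_embs)[0]
best = int(scores.argmax())
print(passages[best], float(scores[best]))
```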
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ncthuan/vi-distilled-msmarco-MiniLM-L12-cos-v5')
model = AutoModel.from_pretrained('ncthuan/vi-distilled-msmarco-MiniLM-L12-cos-v5')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
The thesis will be available at [https://github.com/ncthuan/uet-qa](https://github.com/ncthuan/uet-qa), with evaluation results in chapter 4.

| Model | Recall@10 | MRR@10 |
|---|---|---|
| paraphrase-multilingual-minilm | 75 | 49 |
| this model | 85 | 58 |
## Training
The model was distilled using English-Vietnamese parallel data with this [training script](https://github.com/ncthuan/uet-qa/blob/main/scripts/train/make_multilingual.py), which follows the work of [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://www.sbert.net/examples/training/multilingual/README.html).
- Teacher: msmarco-MiniLM-L12-cos-v5
- Student: paraphrase-multilingual-minilm-L12-v2
- Data: PhoMT, MKQA, MLQA, XQuAD
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40148 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 2000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2000,
"weight_decay": 0.005
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
```
@inproceedings{reimers-2020-multilingual-sentence-bert,
title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2004.09813",
}
@article{thuan2022-uetqa,
title={{Extractive question answering system on regulations for University of Engineering and Technology}},
author={Nguyen, Thuan},
journal={Undergraduate Thesis, University of Engineering and Technology, Vietnam National University Hanoi},
year={2022}
}
``` |
chebmarcel/modern_nature | 72a7fa4859e599e9b99229c00fbe71a6902a3bf9 | 2022-05-04T11:10:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | chebmarcel | null | chebmarcel/modern_nature | 4 | null | transformers | 19,523 | Entry not found |
domenicrosati/question_converter-3b | ae6b54ed920e5328bae9c402d56e0998cb5a0bf3 | 2022-05-10T17:05:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:domenicrosati/QA2D",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | domenicrosati | null | domenicrosati/question_converter-3b | 4 | 1 | transformers | 19,524 | ---
language:
- en
tags:
- text2text-generation
datasets:
- domenicrosati/QA2D
widget:
- text: "Where in the world is Carmen Sandiego. She is in Abruzzo"
example_title: "Where is Carmen Sandiego?"
- text: "Halifax is a city in which province. Nova Scotia"
example_title: "A Halifact"
---
# Question-Answer to Statement Converter
A converter that turns question-answer pairs into declarative statements, from https://github.com/jifan-chen/QA-Verification-Via-NLI
See:
```
@article{chen2021can,
title={Can NLI Models Verify QA Systems' Predictions?},
author={Chen, Jifan and Choi, Eunsol and Durrett, Greg},
journal={EMNLP Findings},
year={2021}
}
```
**Note:** I am not the maintainer or original author; I am just hosting the model here so that Hugging Face APIs can be used to produce statements from question-answer pairs in downstream applications.
## TL;DR:
We fine-tune a seq2seq model, T5-3B (Raffel et al., 2020), using the \\((a, q, d)\\) pairs annotated by Demszky et al. (2018), where \\(a\\) is the answer, \\(q\\) is the question, and \\(d\\) is the declarative sentence (i.e. the statement).
See Appendix B.2 of Chen et al. for more.
## Usage
The prompt should be `{question} {separator} {answer}`, where the separator is `</s>`.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained('domenicrosati/question_converter-3b')
model = AutoModelForSeq2SeqLM.from_pretrained('domenicrosati/question_converter-3b')
question = "Where in the world is Carmen Sandiego?"
answer = "She is in Abruzzo"
prompt = f'{question} </s> {answer}'
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output_ids = model.generate(input_ids)
responses = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
```
> `['Carmen Sandiego is in Abruzzo.']`
|
Yanhao/simcse-roberta-large | 51f382a898d56ba3c4314ae259fff949498a2173 | 2022-05-04T21:55:59.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Yanhao | null | Yanhao/simcse-roberta-large | 4 | null | transformers | 19,525 | Entry not found |
YeRyeongLee/bert-base-uncased-finetuned-small-0505 | 7825fc10aa00d5908959a5f1d62ab44d56b1000f | 2022-05-04T22:54:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | YeRyeongLee | null | YeRyeongLee/bert-base-uncased-finetuned-small-0505 | 4 | null | transformers | 19,526 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-small-0505
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-small-0505
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8649
- Accuracy: 0.1818
- F1: 0.1182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 13 | 1.8337 | 0.1818 | 0.0559 |
| No log | 2.0 | 26 | 1.8559 | 0.2727 | 0.1414 |
| No log | 3.0 | 39 | 1.8488 | 0.1818 | 0.1010 |
| No log | 4.0 | 52 | 1.8649 | 0.1818 | 0.1182 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
YeRyeongLee/mental-bert-base-uncased-finetuned-0505 | 78a2f58e03bdd1b1c7cf77fbdfd5cee5339e85a5 | 2022-05-05T04:19:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | YeRyeongLee | null | YeRyeongLee/mental-bert-base-uncased-finetuned-0505 | 4 | null | transformers | 19,527 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: mental-bert-base-uncased-finetuned-0505
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mental-bert-base-uncased-finetuned-0505
This model is a fine-tuned version of [mental/mental-bert-base-uncased](https://huggingface.co/mental/mental-bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4195
- Accuracy: 0.9181
- F1: 0.9182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 1373 | 0.2846 | 0.9124 | 0.9119 |
| No log | 2.0 | 2746 | 0.3468 | 0.9132 | 0.9129 |
| No log | 3.0 | 4119 | 0.3847 | 0.9189 | 0.9192 |
| No log | 4.0 | 5492 | 0.4195 | 0.9181 | 0.9182 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CarlCochet/trajectory-transformer-halfcheetah-medium-replay-v2 | b23fd1dcdc38aa1d5aacdbbdbfe93287be9e24a1 | 2022-05-12T17:02:23.000Z | [
"pytorch",
"trajectory_transformer",
"feature-extraction",
"transformers",
"license:mit"
] | feature-extraction | false | CarlCochet | null | CarlCochet/trajectory-transformer-halfcheetah-medium-replay-v2 | 4 | null | transformers | 19,528 | ---
license: mit
---
|
CarlCochet/trajectory-transformer-walker2d-medium-v2 | ff7ed72745a4210be17b9d207543ec34c42f8037 | 2022-05-12T17:08:05.000Z | [
"pytorch",
"trajectory_transformer",
"feature-extraction",
"transformers",
"license:mit"
] | feature-extraction | false | CarlCochet | null | CarlCochet/trajectory-transformer-walker2d-medium-v2 | 4 | null | transformers | 19,529 | ---
license: mit
---
|
PSW/low_resource_percent1_seed42 | 057fb7075d8669a0bf8195eeabf2b96aa5274566 | 2022-05-05T09:47:08.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent1_seed42 | 4 | null | transformers | 19,530 | Entry not found |
benjamin/gpt2-wechsel-uyghur | a4cee9a802b209925853b269304704b49477a824 | 2022-05-05T14:24:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ug",
"arxiv:2112.06598",
"transformers",
"license:mit"
] | text-generation | false | benjamin | null | benjamin/gpt2-wechsel-uyghur | 4 | null | transformers | 19,531 | ---
language: ug
license: mit
---
# gpt2-wechsel-uyghur
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://arxiv.org/abs/2112.06598
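For readers of this card, a hedged usage sketch (not part of the original card) with the standard text-generation pipeline:
```python
from transformers import pipeline

# Illustrative only: generate Uyghur text with this checkpoint.
generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-uyghur")

prompt = "..."  # replace with an Uyghur prompt of your choice
print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
```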
## Performance
| Model | PPL |
|---|---|
| `gpt2-wechsel-sundanese` | **111.72** |
| `gpt2` (retrained from scratch) | 149.46 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-scottish-gaelic` | **16.43** |
| `gpt2` (retrained from scratch) | 19.53 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-uyghur` | **34.33** |
| `gpt2` (retrained from scratch) | 42.82 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-malagasy` | **14.01** |
| `gpt2` (retrained from scratch) | 15.93 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@misc{minixhofer2021wechsel,
title={WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models},
author={Benjamin Minixhofer and Fabian Paischer and Navid Rekabsaz},
year={2021},
eprint={2112.06598},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
JoMart/distilbert-base-uncased-finetuned-cola | bf0728bfb9542d95c333ec2298774fbe6fccc3fd | 2022-05-05T13:56:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | JoMart | null | JoMart/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 19,532 | Entry not found |
PSW/low_resource_percent20_maxsimins_seed1 | 8acee22fc56da26d1c3f8e7dd93013bd11ef0bca | 2022-05-05T15:26:39.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent20_maxsimins_seed1 | 4 | null | transformers | 19,533 | Entry not found |
PSW/low_resource_percent20_seed27 | a203949dff961204c44de0bf92f5ddc64db1c50d | 2022-05-05T20:17:03.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent20_seed27 | 4 | null | transformers | 19,534 | Entry not found |
Khalsuu/english-filipino-wav2vec2-l-xls-r-test-04 | 805103a23454e1ffce0eea8c915c2ac49ae9a71d | 2022-05-06T03:36:02.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:filipino_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Khalsuu | null | Khalsuu/english-filipino-wav2vec2-l-xls-r-test-04 | 4 | null | transformers | 19,535 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- filipino_voice
model-index:
- name: english-filipino-wav2vec2-l-xls-r-test-04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# english-filipino-wav2vec2-l-xls-r-test-04
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0713
- Wer: 0.5078
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.2131 | 2.09 | 400 | 0.7100 | 0.6832 |
| 0.6539 | 4.19 | 800 | 0.8307 | 0.6602 |
| 0.5081 | 6.28 | 1200 | 0.7120 | 0.6297 |
| 0.42 | 8.38 | 1600 | 0.7309 | 0.6299 |
| 0.3482 | 10.47 | 2000 | 0.7665 | 0.6148 |
| 0.293 | 12.57 | 2400 | 0.7091 | 0.5840 |
| 0.265 | 14.66 | 2800 | 0.8170 | 0.6102 |
| 0.2294 | 16.75 | 3200 | 0.9715 | 0.6216 |
| 0.1872 | 18.85 | 3600 | 0.8516 | 0.5837 |
| 0.1644 | 20.94 | 4000 | 0.8408 | 0.5767 |
| 0.1495 | 23.04 | 4400 | 0.9188 | 0.5717 |
| 0.1276 | 25.13 | 4800 | 1.0149 | 0.5451 |
| 0.116 | 27.23 | 5200 | 1.0220 | 0.5683 |
| 0.1017 | 29.32 | 5600 | 0.9319 | 0.5253 |
| 0.0899 | 31.41 | 6000 | 0.9949 | 0.5435 |
| 0.0861 | 33.51 | 6400 | 1.1029 | 0.5467 |
| 0.0766 | 35.6 | 6800 | 1.0219 | 0.5193 |
| 0.065 | 37.7 | 7200 | 1.0836 | 0.5214 |
| 0.0588 | 39.79 | 7600 | 1.0713 | 0.5078 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
xingqiang/nezha-zh-address-match-base | ed14db994b9841bd91e9cda59df93a4641f55ad2 | 2022-05-06T03:06:38.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | xingqiang | null | xingqiang/nezha-zh-address-match-base | 4 | null | transformers | 19,536 | Entry not found |
kneis/distilbert-sentiment-adversarial | 6c8cbdc7d2e61b58b959013bc270dc63657769eb | 2022-05-06T03:25:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | kneis | null | kneis/distilbert-sentiment-adversarial | 4 | null | transformers | 19,537 | Entry not found |
nikhilmatta/NewsBiasClassifier | 55f49900e27d0901636b0d025b5d5f17179cbe98 | 2022-05-06T03:52:53.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | nikhilmatta | null | nikhilmatta/NewsBiasClassifier | 4 | null | transformers | 19,538 | Entry not found |
BAAI/GLM | b3046831fe5a497bbddcbf9e86179f8804bf8040 | 2022-07-20T08:00:47.000Z | [
"pytorch",
"transformers"
] | null | false | BAAI | null | BAAI/GLM | 4 | null | transformers | 19,539 | Entry not found |
Wakaka/bert-finetuned-mrpc | 67c43ed5e9c13430204a107b620956be9627e092 | 2022-05-06T10:01:01.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Wakaka | null | Wakaka/bert-finetuned-mrpc | 4 | null | transformers | 19,540 | Entry not found |
crabz/exp6 | eb0ce7de3dc94d8d7ef4e1dfa7588b1cd44302f7 | 2022-05-06T10:08:16.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | crabz | null | crabz/exp6 | 4 | null | transformers | 19,541 | Entry not found |
chrishistewandb/finetuning-sentiment-model-3000-samples | 2260269385be34a03831ed0f7668f25de410fca5 | 2022-05-06T21:16:00.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | chrishistewandb | null | chrishistewandb/finetuning-sentiment-model-3000-samples | 4 | null | transformers | 19,542 | Entry not found |
armanc/affiliations-roberta-orig-83K-loss-0.102 | b16ac22fb83a97fa13932511b545c5c73a577a30 | 2022-05-06T23:31:47.000Z | [
"pytorch",
"transformers"
] | null | false | armanc | null | armanc/affiliations-roberta-orig-83K-loss-0.102 | 4 | null | transformers | 19,543 | Entry not found |
Siyam/Dansk-wav2vec21 | 5dad47bf35f9f431be51c0887bb56f71fd26f19a | 2022-05-07T18:43:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Siyam | null | Siyam/Dansk-wav2vec21 | 4 | null | transformers | 19,544 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Dansk-wav2vec21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Dansk-wav2vec21
This model is a fine-tuned version of [Siyam/SKYLy](https://huggingface.co/Siyam/SKYLy) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8025
- Wer: 0.4057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0563 | 4.26 | 400 | 0.7887 | 0.4560 |
| 0.0756 | 8.51 | 800 | 0.7519 | 0.4444 |
| 0.0497 | 12.77 | 1200 | 0.7979 | 0.4256 |
| 0.0335 | 17.02 | 1600 | 0.8025 | 0.4057 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.10.3
|
SebastianS/marian-finetuned-kde4-en-to-fr-accelerate | de587a5b5f753055e0c5d96a008c8539dc93dabe | 2022-05-07T20:25:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SebastianS | null | SebastianS/marian-finetuned-kde4-en-to-fr-accelerate | 4 | null | transformers | 19,545 | Entry not found |
KoichiYasuoka/roberta-base-coptic-upos | a7fc4d1f0f0bd23e8a347ccc30f3bfd284576912 | 2022-05-08T05:18:20.000Z | [
"pytorch",
"roberta",
"token-classification",
"cop",
"dataset:universal_dependencies",
"transformers",
"coptic",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-coptic-upos | 4 | null | transformers | 19,546 | ---
language:
- "cop"
tags:
- "coptic"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "ⲧⲉⲛⲟⲩⲇⲉⲛ̄ⲟⲩⲟⲉⲓⲛϩ︤ⲙ︥ⲡϫⲟⲉⲓⲥ·"
- text: "ⲙⲟⲟϣⲉϩⲱⲥϣⲏⲣⲉⲙ̄ⲡⲟⲩⲟⲉⲓⲛ·"
---
# roberta-base-coptic-upos
## Model Description
This is a RoBERTa model pre-trained with [UD_Coptic](https://universaldependencies.org/cop/) for POS-tagging and dependency-parsing, derived from [roberta-base-coptic](https://huggingface.co/KoichiYasuoka/roberta-base-coptic). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-coptic-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-coptic-upos")
```
or
```
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-coptic-upos")
```
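The snippets above only load the tokenizer and model; a minimal tagging sketch (my own illustration, reusing one of the widget sentences above) could look like this:
```py
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-coptic-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-coptic-upos")

text = "ⲙⲟⲟϣⲉϩⲱⲥϣⲏⲣⲉⲙ̄ⲡⲟⲩⲟⲉⲓⲛ·"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each word piece to its predicted UPOS tag.
pred_ids = logits.argmax(-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
# Special tokens (<s>, </s>) appear in the output alongside the word pieces.
print([(tok, model.config.id2label[i]) for tok, i in zip(tokens, pred_ids)])
```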
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
theojolliffe/distill-pegasus-cnn-arxiv-pubmed-v3-e8 | 471186f4f9932d23061bce38c612c04115d37ad8 | 2022-05-08T09:33:22.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/distill-pegasus-cnn-arxiv-pubmed-v3-e8 | 4 | null | transformers | 19,547 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distill-pegasus-cnn-arxiv-pubmed-v3-e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distill-pegasus-cnn-arxiv-pubmed-v3-e8
This model is a fine-tuned version of [theojolliffe/distill-pegasus-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distill-pegasus-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6844
- Rouge1: 49.0081
- Rouge2: 30.6784
- Rougel: 33.5258
- Rougelsum: 45.5354
- Gen Len: 125.6852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.7633 | 1.0 | 795 | 2.1211 | 48.9615 | 30.3509 | 33.7359 | 44.508 | 124.7963 |
| 2.3051 | 2.0 | 1590 | 1.9464 | 48.6806 | 30.452 | 34.2187 | 44.6379 | 124.6296 |
| 2.2244 | 3.0 | 2385 | 1.8294 | 48.9739 | 30.6717 | 33.605 | 45.0942 | 125.3704 |
| 2.0733 | 4.0 | 3180 | 1.7769 | 49.0049 | 30.8354 | 33.6965 | 44.8603 | 125.7037 |
| 1.9759 | 5.0 | 3975 | 1.7192 | 50.3946 | 32.1072 | 34.5453 | 46.4493 | 125.5741 |
| 1.9478 | 6.0 | 4770 | 1.7037 | 49.4631 | 31.654 | 34.4601 | 46.2376 | 125.5185 |
| 1.9016 | 7.0 | 5565 | 1.6874 | 48.2641 | 29.6354 | 33.1059 | 44.8436 | 125.6852 |
| 1.8882 | 8.0 | 6360 | 1.6844 | 49.0081 | 30.6784 | 33.5258 | 45.5354 | 125.6852 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
rahulacj/mbart-large-cc25-finetuned-hi-to-en-v2 | 6a8e23b442401945c45e68cd8982cb4a170ed9b1 | 2022-05-10T23:37:59.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rahulacj | null | rahulacj/mbart-large-cc25-finetuned-hi-to-en-v2 | 4 | null | transformers | 19,548 | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-large-cc25-finetuned-hi-to-en-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-hi-to-en-v2
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8027
- Bleu: 33.4814
- Gen Len: 21.8974
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
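To make the relationship between the batch-size settings explicit, here is a tiny illustrative calculation (an addition, not from the original card); the single-device assumption is ours.
```
# How the reported total_train_batch_size of 4 follows from the settings above.
per_device_train_batch_size = 1
gradient_accumulation_steps = 4
num_devices = 1  # assumed single GPU
total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # -> 4
```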
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.8971 | 1.0 | 3955 | 1.6015 | 19.3557 | 43.7594 |
| 1.3266 | 2.0 | 7910 | 1.4917 | 19.1404 | 35.3155 |
| 0.9906 | 3.0 | 11865 | 1.5354 | 26.999 | 26.7497 |
| 0.6987 | 4.0 | 15820 | 1.6457 | 31.9572 | 23.4565 |
| 0.5073 | 5.0 | 19775 | 1.8544 | 34.1169 | 22.1507 |
| 0.3554 | 6.0 | 23730 | 2.0985 | 34.0746 | 22.2396 |
| 0.2423 | 7.0 | 27685 | 2.2534 | 33.2205 | 22.2184 |
| 0.1918 | 8.0 | 31640 | 2.4014 | 32.2001 | 22.635 |
| 0.1423 | 9.0 | 35595 | 2.5067 | 32.4074 | 22.8716 |
| 0.1105 | 10.0 | 39550 | 2.5618 | 33.1965 | 22.5905 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Jeevesh8/bert_ft_cola-1 | d11f92b68d1353021c372130fed9a31bb4160612 | 2022-05-09T08:59:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-1 | 4 | null | transformers | 19,549 | Entry not found |
guhuawuli/distilbert-base-uncased-finetuned-ner | 4618777dd0d2e6bf2752a8ec9219b97a3d754c12 | 2022-05-09T15:03:24.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | guhuawuli | null | guhuawuli/distilbert-base-uncased-finetuned-ner | 4 | null | transformers | 19,550 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.8982049036777583
- name: Recall
type: recall
value: 0.9179997762613268
- name: F1
type: f1
value: 0.9079944674965422
- name: Accuracy
type: accuracy
value: 0.979427137115351
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0729
- Precision: 0.8982
- Recall: 0.9180
- F1: 0.9080
- Accuracy: 0.9794
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
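As a rough, hypothetical sketch (the training script is not part of this card), these settings map onto `TrainingArguments` roughly as follows; the output directory is an assumed name and anything not listed above is left at its default.
```
# Hypothetical mapping of the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```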
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 220 | 0.1036 | 0.8607 | 0.8797 | 0.8701 | 0.9727 |
| No log | 2.0 | 440 | 0.0762 | 0.8912 | 0.9131 | 0.9020 | 0.9783 |
| 0.2005 | 3.0 | 660 | 0.0729 | 0.8982 | 0.9180 | 0.9080 | 0.9794 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0a0+3fd9dcf
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Jeevesh8/bert_ft_cola-2 | ea09f156ae940d48ef835899f37c90602fd9145a | 2022-05-09T13:55:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-2 | 4 | null | transformers | 19,551 | Entry not found |
Jeevesh8/bert_ft_cola-3 | 8be3c295e3846e88ebacbd3a4673aa920052e3a0 | 2022-05-09T13:56:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-3 | 4 | null | transformers | 19,552 | Entry not found |
Jeevesh8/bert_ft_cola-4 | 0d4f759956a335005083f1b4a807ff6c105a3460 | 2022-05-09T13:56:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-4 | 4 | null | transformers | 19,553 | Entry not found |
Jeevesh8/bert_ft_cola-5 | cc01f8dee424dbe6753b576a5afb3f87418b10de | 2022-05-09T13:57:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-5 | 4 | null | transformers | 19,554 | Entry not found |
Jeevesh8/bert_ft_cola-6 | e23c85995f56724e9fd490f863e377d40da2831e | 2022-05-09T13:58:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-6 | 4 | null | transformers | 19,555 | Entry not found |
Jeevesh8/bert_ft_cola-7 | cc33ddda3f655b75963c7f7fc8dab462a811d0fb | 2022-05-09T13:58:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-7 | 4 | null | transformers | 19,556 | Entry not found |
Jeevesh8/bert_ft_cola-8 | 090d060554b3777c2e34d7ca1362d3e43fff851a | 2022-05-09T13:59:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-8 | 4 | null | transformers | 19,557 | Entry not found |
Jeevesh8/bert_ft_cola-9 | 0f711801ab099a3819e0e7ded470db4a49762f04 | 2022-05-09T14:00:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-9 | 4 | null | transformers | 19,558 | Entry not found |
Jeevesh8/bert_ft_cola-10 | 2eadeaeda49e0e7cbb733f8a3adfb9c8646ff261 | 2022-05-09T14:00:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-10 | 4 | null | transformers | 19,559 | Entry not found |
Jeevesh8/bert_ft_cola-13 | 0155bb2e2ee8f974e92073285514fbe877f9cea7 | 2022-05-09T14:02:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-13 | 4 | null | transformers | 19,560 | Entry not found |
Jeevesh8/bert_ft_cola-14 | 9444135c043c5fde981891760c2cfeb6c208816e | 2022-05-09T14:03:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-14 | 4 | null | transformers | 19,561 | Entry not found |
Jeevesh8/bert_ft_cola-15 | f53de774aba2e39810a625ce609ebb0a15338120 | 2022-05-09T14:04:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-15 | 4 | null | transformers | 19,562 | Entry not found |
Jeevesh8/bert_ft_cola-16 | 915f6e733fd7c5354ea159354aa93d7601027af6 | 2022-05-09T14:04:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-16 | 4 | null | transformers | 19,563 | Entry not found |
Jeevesh8/bert_ft_cola-17 | b35642730cfbf2463ec7852e37a2d11c8a94409d | 2022-05-09T14:05:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-17 | 4 | null | transformers | 19,564 | Entry not found |
Jeevesh8/bert_ft_cola-18 | 81d4722aecb2cbd1a97d4b59746fea92b7047bb0 | 2022-05-09T14:06:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-18 | 4 | null | transformers | 19,565 | Entry not found |
Jeevesh8/bert_ft_cola-19 | 7c2a159f19acb0b9a5ce7cf83cfc2568a23996b5 | 2022-05-09T14:06:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-19 | 4 | null | transformers | 19,566 | Entry not found |
Jeevesh8/bert_ft_cola-22 | 83a4a56f9454978c759560063b7c0719c24f1bc5 | 2022-05-09T14:08:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-22 | 4 | null | transformers | 19,567 | Entry not found |
Jeevesh8/bert_ft_cola-23 | 45183fb9296b4f69ddde028a548c685516f7c824 | 2022-05-09T14:09:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-23 | 4 | null | transformers | 19,568 | Entry not found |
Jeevesh8/bert_ft_cola-24 | c978c2fec2a41ca25633073b3322fa88b8b80ccc | 2022-05-09T14:10:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-24 | 4 | null | transformers | 19,569 | Entry not found |
Jeevesh8/bert_ft_cola-25 | efb773b303a9d015e769196afcba1c3442031a20 | 2022-05-09T14:10:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-25 | 4 | null | transformers | 19,570 | Entry not found |
Jeevesh8/bert_ft_cola-26 | 3b164098b36edb49409136c18f314147f7c82a72 | 2022-05-09T14:11:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-26 | 4 | null | transformers | 19,571 | Entry not found |
Jeevesh8/bert_ft_cola-27 | 9f71d7a615f2862ef52ce1f6c7acbc883acd8fe2 | 2022-05-09T14:11:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-27 | 4 | null | transformers | 19,572 | Entry not found |
Jeevesh8/bert_ft_cola-28 | 6993d8408b312fe90c31775b698caf27e588b064 | 2022-05-09T14:12:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-28 | 4 | null | transformers | 19,573 | Entry not found |
Jeevesh8/bert_ft_cola-29 | 2559284ef3144367e3a06993696a199bb05c5df8 | 2022-05-09T14:13:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-29 | 4 | null | transformers | 19,574 | Entry not found |
Jeevesh8/bert_ft_cola-30 | f18ad70db1a29bd8db9dcb69cdf1afea2caed76f | 2022-05-09T14:13:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-30 | 4 | null | transformers | 19,575 | Entry not found |
Jeevesh8/bert_ft_cola-31 | fa10068d341c59595d6c913e0cf15c902eebd3db | 2022-05-09T14:14:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-31 | 4 | null | transformers | 19,576 | Entry not found |
Jeevesh8/bert_ft_cola-32 | 730dda0a30b21cff0d074e20486d96822d116bf9 | 2022-05-09T14:15:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-32 | 4 | null | transformers | 19,577 | Entry not found |
Jeevesh8/bert_ft_cola-33 | f3a7444b5eb17af273d63a1ce3a76b94c462ed4d | 2022-05-09T14:15:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-33 | 4 | null | transformers | 19,578 | Entry not found |
Jeevesh8/bert_ft_cola-34 | 6945b48c18a6d5eaf6b8f9c63056258002f087b6 | 2022-05-09T14:16:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-34 | 4 | null | transformers | 19,579 | Entry not found |
Jeevesh8/bert_ft_cola-35 | 86290284751a1b33307f32d1ee5e4dc5028cf2d2 | 2022-05-09T14:17:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-35 | 4 | null | transformers | 19,580 | Entry not found |
Jeevesh8/bert_ft_cola-36 | 08a351e48636d2bedcbe1eb9e3fedc21d9df23fc | 2022-05-09T14:17:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-36 | 4 | null | transformers | 19,581 | Entry not found |
Jeevesh8/bert_ft_cola-37 | 860d942dcf44ee920671ff3d85962df9b4235ac0 | 2022-05-09T14:18:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-37 | 4 | null | transformers | 19,582 | Entry not found |
Jeevesh8/bert_ft_cola-38 | 93fe5dfa93311a8e8504ea8db569e1c5f500033e | 2022-05-09T14:19:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-38 | 4 | null | transformers | 19,583 | Entry not found |
Jeevesh8/bert_ft_cola-40 | f1cb19fe20672fe80e3b27e0bbdc8298e1663594 | 2022-05-09T14:20:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-40 | 4 | null | transformers | 19,584 | Entry not found |
Jeevesh8/bert_ft_cola-41 | a30cee5e7a447518c52879a3c7186a52b143236a | 2022-05-09T14:21:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-41 | 4 | null | transformers | 19,585 | Entry not found |
Jeevesh8/bert_ft_cola-42 | f2805dbc5c0bd78ce000e449707a0e72cc05c42d | 2022-05-09T14:22:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-42 | 4 | null | transformers | 19,586 | Entry not found |
Jeevesh8/bert_ft_cola-43 | 31a0af5a593cc128d938b3cc275b9f125f3b55e6 | 2022-05-09T14:22:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-43 | 4 | null | transformers | 19,587 | Entry not found |
Jeevesh8/bert_ft_cola-44 | 0d003f47466b501c212df076ebac772b19d437f1 | 2022-05-09T14:23:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-44 | 4 | null | transformers | 19,588 | Entry not found |
Jeevesh8/bert_ft_cola-46 | 3f0eb8fdca86ef1f1a4cd27a5538cdf5f5821413 | 2022-05-09T14:24:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-46 | 4 | null | transformers | 19,589 | Entry not found |
Jeevesh8/bert_ft_cola-47 | 8e65522b2478ced24ab1f21885c91cdba9b29934 | 2022-05-09T14:25:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-47 | 4 | null | transformers | 19,590 | Entry not found |
Jeevesh8/bert_ft_cola-48 | 7c223292912eadd4b5f493fc61d588d942f0345d | 2022-05-09T14:26:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-48 | 4 | null | transformers | 19,591 | Entry not found |
Jeevesh8/bert_ft_cola-49 | c224ba7d793c4def8698157a9ea8cdc74fd9093d | 2022-05-09T14:26:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-49 | 4 | null | transformers | 19,592 | Entry not found |
Jeevesh8/bert_ft_cola-50 | f69b6d1dd42923b1336ff08ea754eea0010b5c2e | 2022-05-09T14:27:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-50 | 4 | null | transformers | 19,593 | Entry not found |
Jeevesh8/bert_ft_cola-51 | 7ee3b17125719d52670f25bd0c0a9f35b90bb029 | 2022-05-09T14:28:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-51 | 4 | null | transformers | 19,594 | Entry not found |
Jeevesh8/bert_ft_cola-52 | 7e7fad69013037867b958abdf857dfa1ec4ee793 | 2022-05-09T14:28:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-52 | 4 | null | transformers | 19,595 | Entry not found |
Jeevesh8/bert_ft_cola-53 | c73259967a1db7d7943c7c4142c6765046031eaf | 2022-05-09T14:29:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-53 | 4 | null | transformers | 19,596 | Entry not found |
Jeevesh8/bert_ft_cola-54 | aa7c9f4a8abc4115ca8a748275cd227b995e8290 | 2022-05-09T14:30:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-54 | 4 | null | transformers | 19,597 | Entry not found |
Jeevesh8/bert_ft_cola-56 | 238afdf11c0632f64ab9e2db8ae5ecb3bf76df12 | 2022-05-09T14:31:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-56 | 4 | null | transformers | 19,598 | Entry not found |
Jeevesh8/bert_ft_cola-57 | 59b715f593ca4bf2ef8e98f2805c61100f5cc236 | 2022-05-09T14:32:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_cola-57 | 4 | null | transformers | 19,599 | Entry not found |