modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DanielCano/spanish_news_classification_headlines_untrained | 953b1817f717fe1716b19c600d55c11fccbbadf1 | 2022-05-30T10:27:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DanielCano | null | DanielCano/spanish_news_classification_headlines_untrained | 4 | null | transformers | 20,000 | Entry not found |
huggingtweets/ultrafungi | b3402604e745662a3803b4c76ed2ed8a54bcc9f0 | 2022-05-30T11:46:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ultrafungi | 4 | null | transformers | 20,001 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1522479920714240001/wi1LPddl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sydney</div>
<div style="text-align: center; font-size: 14px;">@ultrafungi</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sydney.
| Data | sydney |
| --- | --- |
| Tweets downloaded | 125 |
| Retweets | 35 |
| Short tweets | 9 |
| Tweets kept | 81 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wk3rd28k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ultrafungi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3cil1w2p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3cil1w2p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/ultrafungi')
generator("My dream is", num_return_sequences=5)
```
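Generation is sampled, so outputs vary from run to run. For repeatable results, a minimal sketch using the library's `set_seed` helper (the seed value here is an arbitrary choice, not from this card):
```python
from transformers import pipeline, set_seed

set_seed(42)  # fix the RNG so the sampled generations are repeatable
generator = pipeline('text-generation', model='huggingtweets/ultrafungi')
print(generator("My dream is", num_return_sequences=5))
```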
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
apugachev/bert-langdetect | c68a5ac80504f6c798debe99716fb3fae712aed4 | 2022-05-30T20:10:16.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"transformers"
] | text-classification | false | apugachev | null | apugachev/bert-langdetect | 4 | null | transformers | 20,002 | Entry not found |
YeRyeongLee/xlm-roberta-base-finetuned-removed-0530 | c3277d5e2e2ad60f29695c20eee064ed1c28eb6c | 2022-05-31T08:31:07.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | YeRyeongLee | null | YeRyeongLee/xlm-roberta-base-finetuned-removed-0530 | 4 | null | transformers | 20,003 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-finetuned-removed-0530
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-removed-0530
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9944
- Accuracy: 0.8717
- F1: 0.8719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
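For readers who want to reproduce a configuration like this one, here is a minimal sketch of the equivalent Hugging Face `TrainingArguments`; the output directory is a hypothetical placeholder, not part of the original card:
```python
from transformers import TrainingArguments

# Hedged sketch of the hyperparameters listed above; "./output" is a
# hypothetical directory, not taken from the original card.
args = TrainingArguments(
    output_dir="./output",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=10,
)
```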
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 3180 | 0.6390 | 0.7899 | 0.7852 |
| No log | 2.0 | 6360 | 0.5597 | 0.8223 | 0.8230 |
| No log | 3.0 | 9540 | 0.5177 | 0.8462 | 0.8471 |
| No log | 4.0 | 12720 | 0.5813 | 0.8642 | 0.8647 |
| No log | 5.0 | 15900 | 0.7324 | 0.8557 | 0.8568 |
| No log | 6.0 | 19080 | 0.7589 | 0.8626 | 0.8634 |
| No log | 7.0 | 22260 | 0.7958 | 0.8752 | 0.8751 |
| 0.3923 | 8.0 | 25440 | 0.9177 | 0.8651 | 0.8653 |
| 0.3923 | 9.0 | 28620 | 1.0188 | 0.8673 | 0.8671 |
| 0.3923 | 10.0 | 31800 | 0.9944 | 0.8717 | 0.8719 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
huggingtweets/skeptikons | 36dda3f6bf58a1d2ebb67d104b746300071eb861 | 2022-07-10T09:36:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/skeptikons | 4 | null | transformers | 20,004 | ---
language: en
thumbnail: http://www.huggingtweets.com/skeptikons/1657445759728/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1369269405411139584/B6xOW78i_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Eddie</div>
<div style="text-align: center; font-size: 14px;">@skeptikons</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Eddie.
| Data | Eddie |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 150 |
| Short tweets | 489 |
| Tweets kept | 2610 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2v2w1ly8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @skeptikons's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31cyn37j) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31cyn37j/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/skeptikons')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
theojolliffe/bart-cnn-science-v3-e6 | 43dd92cea58f406ea4d8056c98c2a5f8bf3e0fd9 | 2022-05-31T12:32:01.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-science-v3-e6 | 4 | null | transformers | 20,005 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v3-e6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e6
This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset.
It achieves the following results on the evaluation set (a sketch for recomputing ROUGE follows the list):
- Loss: 0.8057
- Rouge1: 53.7462
- Rouge2: 34.9622
- Rougel: 37.5676
- Rougelsum: 51.0619
- Gen Len: 142.0
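ROUGE scores like these can be recomputed from model predictions and reference summaries. A minimal sketch using the `evaluate` library (assuming `evaluate` and `rouge_score` are installed; the example strings are placeholders, not data from this card):
```python
import evaluate

# Hedged sketch: recompute ROUGE on placeholder prediction/reference pairs.
rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the model summarises the quarterly report"],
    references=["the model summarizes the quarterly report"],
)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum
```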
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9961 | 52.632 | 32.8104 | 35.0789 | 50.3747 | 142.0 |
| 1.174 | 2.0 | 796 | 0.8565 | 52.8308 | 32.7064 | 34.6605 | 50.3348 | 142.0 |
| 0.7073 | 3.0 | 1194 | 0.8322 | 52.2418 | 32.8677 | 36.1806 | 49.6297 | 141.5556 |
| 0.4867 | 4.0 | 1592 | 0.8137 | 53.5537 | 34.5404 | 36.7194 | 50.8394 | 142.0 |
| 0.4867 | 5.0 | 1990 | 0.7996 | 53.4959 | 35.1017 | 37.5143 | 50.9972 | 141.8704 |
| 0.3529 | 6.0 | 2388 | 0.8057 | 53.7462 | 34.9622 | 37.5676 | 51.0619 | 142.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
dexay/reDs3 | dc5429e72b99e166f418ba7c43b140706114f290 | 2022-05-31T13:01:50.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | dexay | null | dexay/reDs3 | 4 | null | transformers | 20,006 | Entry not found |
PontifexMaximus/opus-mt-ur-en-finetuned-fa-to-en | fd8ed0e426372c9b46605c75906a5617cd7a4ca5 | 2022-06-01T16:30:17.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PontifexMaximus | null | PontifexMaximus/opus-mt-ur-en-finetuned-fa-to-en | 4 | null | transformers | 20,007 | Entry not found |
YeRyeongLee/bert-base-uncased-finetuned-filtered-0601 | 9dbf9d817e3db925b381721a7b43f86285579920 | 2022-06-01T13:29:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | YeRyeongLee | null | YeRyeongLee/bert-base-uncased-finetuned-filtered-0601 | 4 | null | transformers | 20,008 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-filtered-0601
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-filtered-0601
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set (a sketch for recomputing these metrics follows the list):
- Loss: 0.1152
- Accuracy: 0.9814
- F1: 0.9815
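Accuracy and F1 figures like these can be recomputed with the `evaluate` library. A minimal sketch on placeholder labels (not data from this card):
```python
import evaluate

# Hedged sketch: compute accuracy and weighted F1 on placeholder labels.
accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
preds, refs = [0, 1, 1, 0], [0, 1, 0, 0]
print(accuracy.compute(predictions=preds, references=refs))  # {'accuracy': 0.75}
print(f1.compute(predictions=preds, references=refs, average="weighted"))
```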
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 3180 | 0.1346 | 0.9664 | 0.9665 |
| No log | 2.0 | 6360 | 0.1352 | 0.9748 | 0.9749 |
| No log | 3.0 | 9540 | 0.1038 | 0.9808 | 0.9808 |
| No log | 4.0 | 12720 | 0.1152 | 0.9814 | 0.9815 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
YeRyeongLee/bert-base-uncased-finetuned-filtered-0602 | bea7f86af00431d7f5ce9e5a7034534158351416 | 2022-06-01T16:16:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | YeRyeongLee | null | YeRyeongLee/bert-base-uncased-finetuned-filtered-0602 | 4 | null | transformers | 20,009 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-filtered-0602
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-filtered-0602
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1959
- Accuracy: 0.9783
- F1: 0.9783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.1777 | 1.0 | 3180 | 0.2118 | 0.9563 | 0.9566 |
| 0.1409 | 2.0 | 6360 | 0.1417 | 0.9736 | 0.9736 |
| 0.1035 | 3.0 | 9540 | 0.1454 | 0.9739 | 0.9739 |
| 0.0921 | 4.0 | 12720 | 0.1399 | 0.9755 | 0.9755 |
| 0.0607 | 5.0 | 15900 | 0.1150 | 0.9792 | 0.9792 |
| 0.0331 | 6.0 | 19080 | 0.1770 | 0.9758 | 0.9758 |
| 0.0289 | 7.0 | 22260 | 0.1782 | 0.9767 | 0.9767 |
| 0.0058 | 8.0 | 25440 | 0.1877 | 0.9796 | 0.9796 |
| 0.008 | 9.0 | 28620 | 0.2034 | 0.9764 | 0.9764 |
| 0.0017 | 10.0 | 31800 | 0.1959 | 0.9783 | 0.9783 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
creynier/wav2vec2-base-swbd-turn-eos-long_short1-8s_utt_removed_4percent | e9e021bd37991766424b6917d69d19742f5a5521 | 2022-06-02T10:20:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-long_short1-8s_utt_removed_4percent | 4 | null | transformers | 20,010 | hello
|
bbelgodere/codeparrot | 51b05975c1604f2561e5313f26307ffcae541b15 | 2022-06-02T00:34:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | bbelgodere | null | bbelgodere/codeparrot | 4 | null | transformers | 20,011 | Entry not found |
chrisvinsen/wav2vec2-final-1-lm-3 | ea28ad47e2375f19afb004a73a7041937f0c37c0 | 2022-06-02T11:11:11.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-final-1-lm-3 | 4 | null | transformers | 20,012 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-19
WER: 0.283
WER with a 4-gram language model: 0.126
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set (a sketch for recomputing WER follows the list):
- Loss: 0.6305
- Wer: 0.4499
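Word error rate can be recomputed with the `evaluate` library. A minimal sketch on placeholder transcripts (illustrative strings, not from this model's evaluation set):
```python
import evaluate

# Hedged sketch: WER on placeholder transcripts; one substitution over
# two reference words gives 0.5.
wer = evaluate.load("wer")
score = wer.compute(predictions=["halo dunia"], references=["hello dunia"])
print(score)  # 0.5
```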
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4816 | 2.74 | 400 | 1.0717 | 0.8927 |
| 0.751 | 5.48 | 800 | 0.7155 | 0.7533 |
| 0.517 | 8.22 | 1200 | 0.7039 | 0.6675 |
| 0.3988 | 10.96 | 1600 | 0.5935 | 0.6149 |
| 0.3179 | 13.7 | 2000 | 0.6477 | 0.5999 |
| 0.2755 | 16.44 | 2400 | 0.5549 | 0.5798 |
| 0.2343 | 19.18 | 2800 | 0.6626 | 0.5798 |
| 0.2103 | 21.92 | 3200 | 0.6488 | 0.5674 |
| 0.1877 | 24.66 | 3600 | 0.5874 | 0.5339 |
| 0.1719 | 27.4 | 4000 | 0.6354 | 0.5389 |
| 0.1603 | 30.14 | 4400 | 0.6612 | 0.5210 |
| 0.1401 | 32.88 | 4800 | 0.6676 | 0.5131 |
| 0.1286 | 35.62 | 5200 | 0.6366 | 0.5075 |
| 0.1159 | 38.36 | 5600 | 0.6064 | 0.4977 |
| 0.1084 | 41.1 | 6000 | 0.6530 | 0.4835 |
| 0.0974 | 43.84 | 6400 | 0.6118 | 0.4853 |
| 0.0879 | 46.58 | 6800 | 0.6316 | 0.4770 |
| 0.0815 | 49.32 | 7200 | 0.6125 | 0.4664 |
| 0.0708 | 52.05 | 7600 | 0.6449 | 0.4683 |
| 0.0651 | 54.79 | 8000 | 0.6068 | 0.4571 |
| 0.0555 | 57.53 | 8400 | 0.6305 | 0.4499 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Fulccrum/distilbert-base-uncased-finetuned-sst2 | d9a2ce86a714451d4ec02f0f998e9aa4bccf13b1 | 2022-06-02T10:28:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Fulccrum | null | Fulccrum/distilbert-base-uncased-finetuned-sst2 | 4 | null | transformers | 20,013 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9128440366972477
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set (a brief usage sketch follows the results):
- Loss: 0.3739
- Accuracy: 0.9128
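Since this is an SST-2 sentiment classifier, it can be used through the standard text-classification pipeline. A minimal sketch (the exact label names depend on the checkpoint's config and are not listed in this card):
```python
from transformers import pipeline

# Hedged sketch: run the fine-tuned SST-2 classifier on a sample sentence.
classifier = pipeline(
    "text-classification",
    model="Fulccrum/distilbert-base-uncased-finetuned-sst2",
)
print(classifier("A charming and often affecting journey."))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}] -- actual labels come from the config
```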
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1885 | 1.0 | 4210 | 0.3092 | 0.9083 |
| 0.1311 | 2.0 | 8420 | 0.3809 | 0.9071 |
| 0.1036 | 3.0 | 12630 | 0.3739 | 0.9128 |
| 0.0629 | 4.0 | 16840 | 0.4623 | 0.9083 |
| 0.036 | 5.0 | 21050 | 0.5198 | 0.9048 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602 | 3ef5cdd8874a2ff54b9747a8c13b15953cd432c0 | 2022-06-02T14:29:58.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | YeRyeongLee | null | YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602 | 4 | null | transformers | 20,014 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electra-base-discriminator-finetuned-filtered-0602
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-finetuned-filtered-0602
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1685
- Accuracy: 0.9720
- F1: 0.9721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
huggingtweets/caballerogaudes | a64dea378bac1e724b7c088609a6226837fd2e38 | 2022-06-02T13:25:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/caballerogaudes | 4 | null | transformers | 20,015 | ---
language: en
thumbnail: http://www.huggingtweets.com/caballerogaudes/1654176335515/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1011998779061559297/5gOeFvds_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CesarCaballeroGaudes</div>
<div style="text-align: center; font-size: 14px;">@caballerogaudes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CesarCaballeroGaudes.
| Data | CesarCaballeroGaudes |
| --- | --- |
| Tweets downloaded | 1724 |
| Retweets | 808 |
| Short tweets | 36 |
| Tweets kept | 880 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d76b6yf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @caballerogaudes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/i6nt6oo6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/i6nt6oo6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/caballerogaudes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/fpdm_models_scibert_hybrid_epochs_4 | 9f13b2ddbc3baad5d8f470f8600ba750608d311a | 2022-06-02T15:19:05.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/fpdm_models_scibert_hybrid_epochs_4 | 4 | null | transformers | 20,016 | Entry not found |
Zaafir/urdu-asr | 2808c96838bef1c9ba8ae85122e15b97314ab8e6 | 2022-06-02T18:21:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Zaafir | null | Zaafir/urdu-asr | 4 | null | transformers | 20,017 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: urdu-asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# urdu-asr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5640
- Wer: 0.8546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
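This card combines a per-device batch of 16 with 2 gradient-accumulation steps for an effective batch of 32, plus native AMP. A minimal sketch of how that combination is expressed in `TrainingArguments` (the output directory is a hypothetical placeholder):
```python
from transformers import TrainingArguments

# Hedged sketch: per-device batch 16 x 2 accumulation steps = effective
# batch 32; "./urdu-asr" is a hypothetical path, not from the card.
args = TrainingArguments(
    output_dir="./urdu-asr",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```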
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.5377 | 15.98 | 400 | 1.5640 | 0.8546 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.2.2
- Tokenizers 0.10.3
|
zoha/wav2vec2-base-common-voice-50p-persian-colab | a01925df7e44b23e48c41839e19df3be45182cbd | 2022-06-24T10:30:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | zoha | null | zoha/wav2vec2-base-common-voice-50p-persian-colab | 4 | null | transformers | 20,018 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-common-voice-50p-persian-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-common-voice-50p-persian-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0939
- Wer: 0.6537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0437 | 2.52 | 600 | 3.0170 | 1.0 |
| 2.3667 | 5.04 | 1200 | 2.1575 | 0.9988 |
| 0.9565 | 7.56 | 1800 | 1.0801 | 0.8410 |
| 0.603 | 10.08 | 2400 | 0.9680 | 0.7678 |
| 0.507 | 12.61 | 3000 | 0.9554 | 0.7470 |
| 0.3754 | 15.13 | 3600 | 0.9524 | 0.7157 |
| 0.4267 | 17.65 | 4200 | 0.9290 | 0.6980 |
| 0.3308 | 20.17 | 4800 | 0.9557 | 0.7061 |
| 0.2259 | 22.69 | 5400 | 0.9864 | 0.6830 |
| 0.2486 | 25.21 | 6000 | 1.1086 | 0.6812 |
| 0.1956 | 27.73 | 6600 | 1.0497 | 0.6805 |
| 0.1835 | 30.25 | 7200 | 1.0660 | 0.6596 |
| 0.1926 | 32.77 | 7800 | 1.1274 | 0.6600 |
| 0.2765 | 35.29 | 8400 | 1.0882 | 0.6603 |
| 0.2397 | 37.82 | 9000 | 1.0939 | 0.6537 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Lorenzo1708/TC01_Trabalho01 | b55107bc249424794272ca6c3ce7441dd9ec6f9a | 2022-06-03T00:46:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Lorenzo1708 | null | Lorenzo1708/TC01_Trabalho01 | 4 | null | transformers | 20,019 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: TC01_Trabalho01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TC01_Trabalho01
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2714
- Accuracy: 0.8979
- F1: 0.8972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
clhuang/t5-hotel-review-sentiment | 5892ceea906706254ca95d23dcd6b7d07362df25 | 2022-06-07T09:17:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"tw",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | clhuang | null | clhuang/t5-hotel-review-sentiment | 4 | null | transformers | 20,020 | ---
language:
- tw
tags:
- t5
license: afl-3.0
---
# Hotel review multi-aspect sentiment classification using T5
We fine-tune a pretrained T5 model to generate multi-aspect sentiment classes. The outputs are overall sentiment, aspect, and aspect+sentiment.
This is a multi-task T5 model for aspect-based sentiment classification, fine-tuned from the Simplified Chinese Mengzi-T5 pretrained model on a training set of only 30,000 examples; it is intended as an example model for NLP research and coursework.
# How to test
Enter text for the different tasks in the inference widget on the right.
Example 1:
面向::早餐可以吃的饱,但是东西没了,不一定会补
Example 2:
面向情绪::房间空调系统有烟味,可考虑做调整
Example 3:
整体情绪::位置离逢甲很近
Dataset:
10,050 customer reviews collected from an online hotel-booking site were organized into three tasks, tripling the total to 30,150 examples (data collected by lab member YYChang).
Input and output format: there are three task prefixes:
'整体情绪::' (overall sentiment)
'面向::' (aspect)
'面向情绪::' (aspect sentiment)
Examples:
整体情绪::因为防疫期间早餐要在房内用餐,但房内电视下的平台有点窄,有点不方便,负面情绪
整体情绪::只是隔音有点不好,负面情绪
整体情绪::订的是豪华家庭房,空间还算大,正面情绪
整体情绪::床大,正面情绪
面向::房间有奇怪的味道,"整洁舒适面向,设施面向"
面向::干净、舒适、亲切,价钱好~,"整洁舒适面向,性价比面向"
面向::位置便利,可以在附近悠闲散步,至市区也不远,又临近大海,住得十分舒服。,"整洁舒适面向,地点面向"
面向情绪::反应无效,服务面向的负面情绪
面向情绪::床其实还蛮好睡,枕头床被还算干净,至少不会让皮肤痒。离火车站市场闹区近。,"整洁舒适面向的正面情绪,设施面向的正面情绪,地点面向的正面情绪"
面向情绪::设备真的太旧了,灯光太暗了。,设施面向的负面情绪
面向情绪::住四天,没人打扫清洁,第一天有盥洗用品,其余就没补充,热水供应不正常,交通尚可。,"整洁舒适面向的负面情绪,设施面向的负面情绪,地点面向的正面情绪"
面向情绪::饭店太过老旧,房内桌子衣橱近乎溃烂,浴室有用过未清的毛巾,排水孔有近半垃圾未清,马桶肮脏,未提供浴巾,莲蓬头只能手持无法挂著墙上使用,空调无法控制,壁纸剥落,走道昏暗,近车站。,"整洁舒适面向的负面情绪,设施面向的负面情绪,地点面向的正面情绪"
Pretrained model:
As a first pass we fine-tune the Simplified Chinese pretrained model "Langboat/mengzi-t5-base".
Since the "Langboat/mengzi-t5-base" model card states that it was trained on Simplified Chinese corpora, we convert the Traditional Chinese reviews to Simplified Chinese before fine-tuning.
Training platform: trained for 3 epochs on a Google Colab Tesla T4 GPU in about 55 minutes; val_loss is roughly 0.0315. This is a preliminary experiment with plenty of room for improvement.
Future work: the next stage will add data augmentation (the collected corpus is imbalanced) and fine-tune Google's mT5 pretrained model, which covers both Traditional and Simplified Chinese, so the fine-tuning corpus can be used in Traditional Chinese directly.
Usage examples (Traditional Chinese input must first be converted to Simplified Chinese before being passed to the model to produce output):
# Load the model (uses the simplet5 package)
```python
# pip install simplet5
from simplet5 import SimpleT5

model = SimpleT5()
model.load_model("t5", "clhuang/t5-hotel-review-sentiment", use_gpu=False)
```
# Overall-sentiment classification task
```python
text = "整体情绪::位置离逢甲很近"
model.predict(text)
# ['正面情绪']
```
# Aspect classification task
```python
text = "面向::早餐可以吃的饱,但是东西没了,不一定会补"
model.predict(text)
# ['服务面向']
```
# Aspect + sentiment classification task
```python
text = '面向情绪::房间空调系统有烟味,可考虑做调整'
model.predict(text)
# ['设施面向的负面情绪']
```
# Traditional Chinese in and out, returning all three task outputs
```python
from opencc import OpenCC

t2s = OpenCC('t2s')  # convert from Traditional Chinese to Simplified Chinese
s2t = OpenCC('s2t')  # convert from Simplified Chinese to Traditional Chinese
class_types = ['整体情绪::', '面向::', '面向情绪::']

def predict(text):
    text = t2s.convert(text)
    response = []
    for prefix in class_types:
        response.append(s2t.convert(model.predict(prefix + text)[0]))
    return response

text = '位置近市區,人員親切,食物好吃'
predict(text)
# ['正面情緒', '服務面向,地點面向', '服務面向的正面情緒,地點面向的正面情緒']
```
|
NorrisPau/my-finetuned-bert | 0ea11901f08fe59388287577fa7a22847040c517 | 2022-06-03T16:59:09.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | NorrisPau | null | NorrisPau/my-finetuned-bert | 4 | null | transformers | 20,021 | Entry not found |
VictorZhu/results | 011abb0b63d4535f80db189d67b54d015c8be547 | 2022-06-03T17:17:57.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | VictorZhu | null | VictorZhu/results | 4 | null | transformers | 20,022 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1428 | 1.0 | 510 | 0.1347 |
| 0.0985 | 2.0 | 1020 | 0.1189 |
| 0.0763 | 3.0 | 1530 | 0.1172 |
| 0.0646 | 4.0 | 2040 | 0.1194 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
juancavallotti/t5-grammar-corruption | a3b74b9863dd51e5cb19f8b547363126971b2b67 | 2022-06-05T00:08:55.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | juancavallotti | null | juancavallotti/t5-grammar-corruption | 4 | null | transformers | 20,023 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-grammar-corruption
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-grammar-corruption
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Jeevesh8/lecun_feather_berts-1 | 1e219793772f71bf8648913119440519cc57c036 | 2022-06-04T06:44:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-1 | 4 | null | transformers | 20,024 | Entry not found |
Jeevesh8/lecun_feather_berts-0 | 5a6ae9c7331ddfd222f501d9fb640abc84ab4a55 | 2022-06-04T06:44:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-0 | 4 | null | transformers | 20,025 | Entry not found |
Jeevesh8/lecun_feather_berts-66 | 1e0e6ec2f83ef98496037556143f756891b3eb79 | 2022-06-04T06:50:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-66 | 4 | null | transformers | 20,026 | Entry not found |
Jeevesh8/lecun_feather_berts-64 | 2581ea852f2673577a0c494116397d66418b3cce | 2022-06-04T06:52:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-64 | 4 | null | transformers | 20,027 | Entry not found |
Jeevesh8/lecun_feather_berts-45 | f20899576837998ccf4fdbfe3708ffc5ff601829 | 2022-06-04T06:50:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-45 | 4 | null | transformers | 20,028 | Entry not found |
Jeevesh8/lecun_feather_berts-42 | 8da1ce22470129be5a4bc3be979d769497a56038 | 2022-06-04T06:51:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-42 | 4 | null | transformers | 20,029 | Entry not found |
Jeevesh8/lecun_feather_berts-51 | e5cbbb0cd566b185c9c008d5ff4545f67e369418 | 2022-06-04T06:50:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-51 | 4 | null | transformers | 20,030 | Entry not found |
Jeevesh8/lecun_feather_berts-44 | ddc725aa0f2335fa35e9f616b5a3587320b28bc3 | 2022-06-04T06:50:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-44 | 4 | null | transformers | 20,031 | Entry not found |
Jeevesh8/lecun_feather_berts-40 | e2eb73a54c8c72764d5cc83dab3f6fdcb57c117e | 2022-06-04T06:50:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-40 | 4 | null | transformers | 20,032 | Entry not found |
Jeevesh8/lecun_feather_berts-54 | b87ff72b2714c84870fbca1157337498bbce5e07 | 2022-06-04T06:50:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-54 | 4 | null | transformers | 20,033 | Entry not found |
Jeevesh8/lecun_feather_berts-46 | b4fbc0e3c7b6fe929ff0ed8202bdf352265b8eb7 | 2022-06-04T06:50:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-46 | 4 | null | transformers | 20,034 | Entry not found |
Jeevesh8/lecun_feather_berts-36 | c3abe51606499cb1ae938f50ac4d520f7a80b96f | 2022-06-04T06:50:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-36 | 4 | null | transformers | 20,035 | Entry not found |
Jeevesh8/lecun_feather_berts-35 | ec73a7e88f0a51e470514f4a7f087833f95aaf89 | 2022-06-04T06:50:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-35 | 4 | null | transformers | 20,036 | Entry not found |
Jeevesh8/lecun_feather_berts-48 | aca369fc1dc6708ad0bf8dbe7d08e2709e3c3d52 | 2022-06-04T06:50:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-48 | 4 | null | transformers | 20,037 | Entry not found |
Jeevesh8/lecun_feather_berts-70 | 9f9d7f5bc2857ab276d961f523f995ce63bca90b | 2022-06-04T06:50:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-70 | 4 | null | transformers | 20,038 | Entry not found |
Jeevesh8/lecun_feather_berts-65 | 82217de3249515ccbb3258c0a8595fd8511f11ef | 2022-06-04T06:50:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-65 | 4 | null | transformers | 20,039 | Entry not found |
Jeevesh8/lecun_feather_berts-53 | dfa48df77c6024a9522692a860b1c4650b68e691 | 2022-06-04T06:50:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-53 | 4 | null | transformers | 20,040 | Entry not found |
Jeevesh8/lecun_feather_berts-37 | 47deb889eeb6181a871d560a7eaad1d449fe9bdc | 2022-06-04T06:50:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-37 | 4 | null | transformers | 20,041 | Entry not found |
Jeevesh8/lecun_feather_berts-49 | ae2b7e0379edba7ec28eb303bd63920ff8281439 | 2022-06-04T06:50:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-49 | 4 | null | transformers | 20,042 | Entry not found |
Jeevesh8/lecun_feather_berts-71 | cbf8d6baf5bdb0f386803bd68993bdcd4a2b0afb | 2022-06-04T06:50:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-71 | 4 | null | transformers | 20,043 | Entry not found |
Jeevesh8/lecun_feather_berts-52 | e2d8326e7e05324692f82285263363e138fad8cf | 2022-06-04T06:50:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-52 | 4 | null | transformers | 20,044 | Entry not found |
Jeevesh8/lecun_feather_berts-38 | 7fa1d8488ffb01fc867505505aded66930911ab7 | 2022-06-04T06:50:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-38 | 4 | null | transformers | 20,045 | Entry not found |
Jeevesh8/lecun_feather_berts-43 | 48750428ecf3042944b4f65705ecd783ff0befc2 | 2022-06-04T06:53:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-43 | 4 | null | transformers | 20,046 | Entry not found |
Jeevesh8/lecun_feather_berts-41 | 2d8e194c925bd3f00f24f3ff479894e127e23f38 | 2022-06-04T06:51:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-41 | 4 | null | transformers | 20,047 | Entry not found |
Jeevesh8/lecun_feather_berts-47 | 92356704ef24fa94b3b837160e29c916424feeab | 2022-06-04T06:50:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-47 | 4 | null | transformers | 20,048 | Entry not found |
Jeevesh8/lecun_feather_berts-39 | d5975fa1ebe10f00c96b43109e51979548f603aa | 2022-06-04T06:50:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-39 | 4 | null | transformers | 20,049 | Entry not found |
Jeevesh8/lecun_feather_berts-72 | 3cc0751db6e89cbe66daf9884998ff52354db83b | 2022-06-04T06:53:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-72 | 4 | null | transformers | 20,050 | Entry not found |
Jeevesh8/lecun_feather_berts-50 | d9b35edaa429c73968e20a2ca5db842948af129a | 2022-06-04T06:51:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-50 | 4 | null | transformers | 20,051 | Entry not found |
Jeevesh8/lecun_feather_berts-56 | 8998a522e45a6833f4721c795470a1af739c3b7b | 2022-06-04T06:50:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-56 | 4 | null | transformers | 20,052 | Entry not found |
Jeevesh8/lecun_feather_berts-61 | 56cb651d769b6718e4c75c904d3c739f8beb378b | 2022-06-04T06:50:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-61 | 4 | null | transformers | 20,053 | Entry not found |
Jeevesh8/lecun_feather_berts-58 | fd4957fa22bdaed3630178110a261f4b4eb7f348 | 2022-06-04T06:50:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-58 | 4 | null | transformers | 20,054 | Entry not found |
Jeevesh8/lecun_feather_berts-62 | 8ac8bea3aa244cef6d5aa66dc685e9c831449941 | 2022-06-04T06:53:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-62 | 4 | null | transformers | 20,055 | Entry not found |
Jeevesh8/lecun_feather_berts-30 | d25bec7b687e79df52e5a20622a86cc3961d9360 | 2022-06-04T06:51:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-30 | 4 | null | transformers | 20,056 | Entry not found |
Jeevesh8/lecun_feather_berts-22 | 7509afb9727099e34bd6244277ae329ac5479440 | 2022-06-04T06:51:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-22 | 4 | null | transformers | 20,057 | Entry not found |
Jeevesh8/lecun_feather_berts-26 | faed0d218759edbfb9d8691f7402affce98abb8e | 2022-06-04T06:51:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-26 | 4 | null | transformers | 20,058 | Entry not found |
Jeevesh8/lecun_feather_berts-24 | abccea36ba9e4cbfc1f0743f580ee064722dcadd | 2022-06-04T06:51:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-24 | 4 | null | transformers | 20,059 | Entry not found |
Jeevesh8/lecun_feather_berts-29 | 4c6144d5f7b2b768ef3b999a168676b89950ce44 | 2022-06-04T06:51:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-29 | 4 | null | transformers | 20,060 | Entry not found |
Jeevesh8/lecun_feather_berts-33 | 53a59aa1a286d0bd84c429cd3d347f8f996d88cf | 2022-06-04T06:51:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-33 | 4 | null | transformers | 20,061 | Entry not found |
Jeevesh8/lecun_feather_berts-27 | a7aaa51fea5ef2089922fda8914ffbc46e837001 | 2022-06-04T06:51:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-27 | 4 | null | transformers | 20,062 | Entry not found |
Jeevesh8/lecun_feather_berts-31 | 2db1f557d2e5286e3da1c52033cd7635b3ace297 | 2022-06-04T06:51:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-31 | 4 | null | transformers | 20,063 | Entry not found |
Jeevesh8/lecun_feather_berts-67 | d60131bd43e913a3b389588cb30f6ac31803d7ab | 2022-06-04T06:50:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-67 | 4 | null | transformers | 20,064 | Entry not found |
Jeevesh8/lecun_feather_berts-68 | a7a8465258375cbab5feee61ea3cbf53c729772d | 2022-06-04T06:50:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-68 | 4 | null | transformers | 20,065 | Entry not found |
Jeevesh8/lecun_feather_berts-20 | dcedb1bd2cd6b81c03651576011bf729a110d85a | 2022-06-04T06:51:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-20 | 4 | null | transformers | 20,066 | Entry not found |
Jeevesh8/lecun_feather_berts-19 | 32ba7b85a731a52995b85899b1e52eeb19bbedf2 | 2022-06-04T06:52:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-19 | 4 | null | transformers | 20,067 | Entry not found |
Jeevesh8/lecun_feather_berts-18 | 8fa87c086ddab584c406e722a040d4a61074659e | 2022-06-04T06:52:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-18 | 4 | null | transformers | 20,068 | Entry not found |
Jeevesh8/lecun_feather_berts-11 | 60aed57782fda88d279cc591d886a4f3e1c0d4e8 | 2022-06-04T06:52:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-11 | 4 | null | transformers | 20,069 | Entry not found |
Jeevesh8/lecun_feather_berts-2 | b400a59459b7aee7909b9a1f5c2413d1df3e1703 | 2022-06-04T06:52:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-2 | 4 | null | transformers | 20,070 | Entry not found |
Jeevesh8/lecun_feather_berts-10 | 8d83b0e51765ac9779637c4d7d2868e88cfd4494 | 2022-06-04T06:52:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-10 | 4 | null | transformers | 20,071 | Entry not found |
Jeevesh8/lecun_feather_berts-3 | 3d7fc23796c0fbbbc87e71bd8576442f7d267768 | 2022-06-04T06:52:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-3 | 4 | null | transformers | 20,072 | Entry not found |
Jeevesh8/lecun_feather_berts-14 | ce0309bfd371c88c84e7e0f35bcb65c814a2b94f | 2022-06-04T06:52:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-14 | 4 | null | transformers | 20,073 | Entry not found |
Jeevesh8/lecun_feather_berts-5 | a4dd1c53ac33687bb1508e965d423532195324d0 | 2022-06-04T06:52:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-5 | 4 | null | transformers | 20,074 | Entry not found |
Jeevesh8/lecun_feather_berts-4 | d63981ad857b7940906aac7bdd8e2bb40c73673a | 2022-06-04T06:52:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-4 | 4 | null | transformers | 20,075 | Entry not found |
Jeevesh8/lecun_feather_berts-8 | 2f63a88b39f55c7bc55c2ad546cf86d8958512b6 | 2022-06-04T06:52:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-8 | 4 | null | transformers | 20,076 | Entry not found |
Jeevesh8/lecun_feather_berts-93 | 76189737f2e22936a6e1ca4e81c641dcf598a3ca | 2022-06-04T06:50:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-93 | 4 | null | transformers | 20,077 | Entry not found |
Jeevesh8/lecun_feather_berts-92 | 30da68d868c0878808ea6575a471e0bd4341845d | 2022-06-04T06:50:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-92 | 4 | null | transformers | 20,078 | Entry not found |
Jeevesh8/lecun_feather_berts-91 | df6a7b0d330289ce9d95cc48905aa363ab41014a | 2022-06-04T06:51:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-91 | 4 | null | transformers | 20,079 | Entry not found |
Jeevesh8/lecun_feather_berts-90 | dfcfb29ec58b01d1e68d9a0a2949a1ae93321e46 | 2022-06-04T06:50:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-90 | 4 | null | transformers | 20,080 | Entry not found |
Jeevesh8/lecun_feather_berts-84 | e9be863cfe3179733ffbf4771fd9d4b0e391362e | 2022-06-04T06:51:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-84 | 4 | null | transformers | 20,081 | Entry not found |
Jeevesh8/lecun_feather_berts-85 | bd9ad60bef88594428cf7b764e57911660a025fb | 2022-06-04T06:51:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-85 | 4 | null | transformers | 20,082 | Entry not found |
Jeevesh8/lecun_feather_berts-88 | 7d4302feb02b1166948568ab5a0840644ee946e1 | 2022-06-04T06:51:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-88 | 4 | null | transformers | 20,083 | Entry not found |
Jeevesh8/lecun_feather_berts-75 | aa273e04b72f19ec6b959077409a6c7a491f0130 | 2022-06-04T06:51:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-75 | 4 | null | transformers | 20,084 | Entry not found |
Jeevesh8/lecun_feather_berts-89 | 08da7588c05d95da35fe6d3d7339ac1b8558c9a0 | 2022-06-04T06:51:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-89 | 4 | null | transformers | 20,085 | Entry not found |
Jeevesh8/lecun_feather_berts-86 | 880773762b414e194ef6aff265e6771a24a2aea0 | 2022-06-04T06:50:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-86 | 4 | null | transformers | 20,086 | Entry not found |
Jeevesh8/lecun_feather_berts-80 | f864ccb95486e4104bb6ee150fbed5af4fcc075a | 2022-06-04T06:51:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-80 | 4 | null | transformers | 20,087 | Entry not found |
Jeevesh8/lecun_feather_berts-82 | d2a38c080dd9665fa6f7cf7fc049da10308da8b7 | 2022-06-04T06:51:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-82 | 4 | null | transformers | 20,088 | Entry not found |
Jeevesh8/lecun_feather_berts-81 | 1e81abb6b8ac9055c5bc07d7d852b361348232bd | 2022-06-04T06:51:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-81 | 4 | null | transformers | 20,089 | Entry not found |
Jeevesh8/lecun_feather_berts-83 | 78fbd3f4ed2ce5cc74aa82f3d65699f41462fa38 | 2022-06-04T06:51:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-83 | 4 | null | transformers | 20,090 | Entry not found |
Jeevesh8/lecun_feather_berts-77 | 74c16ee260e4bbfe03dc43a19ff4c17a5bd09d07 | 2022-06-04T06:51:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-77 | 4 | null | transformers | 20,091 | Entry not found |
Jeevesh8/lecun_feather_berts-76 | 5580cc9dda7ba5fd90445c07e04ef0cb66a11fea | 2022-06-04T06:51:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-76 | 4 | null | transformers | 20,092 | Entry not found |
Jeevesh8/lecun_feather_berts-79 | 685cf4896cd7b108811fd5582ff7e0223418b19a | 2022-06-04T06:51:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-79 | 4 | null | transformers | 20,093 | Entry not found |
Jeevesh8/lecun_feather_berts-78 | 099b8542949d37f1ab37314e2d7145ba312fbbb4 | 2022-06-04T06:51:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-78 | 4 | null | transformers | 20,094 | Entry not found |
Jeevesh8/lecun_feather_berts-98 | a425c04238e1415f9364bc3e2a9167f0e6a95478 | 2022-06-04T06:53:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-98 | 4 | null | transformers | 20,095 | Entry not found |
Jeevesh8/lecun_feather_berts-97 | a2f9c3b6d5a86745a669d82843345891a88b722e | 2022-06-04T06:53:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-97 | 4 | null | transformers | 20,096 | Entry not found |
Jeevesh8/lecun_feather_berts-99 | 7e15b2d8bacc6fc179b3f5577441e49d5122d8f9 | 2022-06-04T06:53:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-99 | 4 | null | transformers | 20,097 | Entry not found |
Jeevesh8/lecun_feather_berts-96 | 7d5809840329f98ca03b09a6f406e7c1d30ba51d | 2022-06-04T06:53:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-96 | 4 | null | transformers | 20,098 | Entry not found |
Jeevesh8/lecun_feather_berts-95 | 38a5f89b6d08d90363163ad9b80fbb8d6e39d136 | 2022-06-04T06:53:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-95 | 4 | null | transformers | 20,099 | Entry not found |