modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Jeevesh8/std_pnt_04_feather_berts-0 | 2d8e99398331cc3fa50d4e647b5c5d8d7ac421ee | 2022-06-12T06:03:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-0 | 4 | null | transformers | 20,200 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-59 | 48510f06a64876cb61f69280a426419b76d99486 | 2022-06-12T06:03:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-59 | 4 | null | transformers | 20,201 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-3 | 4ccce9ae3c36341246e162444578fad9537f0771 | 2022-06-12T06:03:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-3 | 4 | null | transformers | 20,202 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-1 | cd5ed0ce15d6a8cc1ae6a9ba076f4bb68e35f910 | 2022-06-12T06:03:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-1 | 4 | null | transformers | 20,203 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-2 | b1da8814de7cb8a7eafd318df74d18c08af19459 | 2022-06-12T06:05:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-2 | 4 | null | transformers | 20,204 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-12 | 41e85a5bd47e05af10bcc7b46f5c983476e85dc6 | 2022-06-12T06:05:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-12 | 4 | null | transformers | 20,205 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-87 | c39f79f72c3e00ce5242228932f4194ec4f9e8ee | 2022-06-12T06:03:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-87 | 4 | null | transformers | 20,206 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-31 | c549e70cc14ff60fbbb7397e9bd44ff6452a5cab | 2022-06-12T06:03:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-31 | 4 | null | transformers | 20,207 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-6 | 03e15f7894044cff57fb5a3d8f675b2c2cd8fb90 | 2022-06-12T06:03:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-6 | 4 | null | transformers | 20,208 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-88 | c4135ce6938af05ff9b8dddf0c25463f21fea187 | 2022-06-12T06:03:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-88 | 4 | null | transformers | 20,209 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-8 | 1dc3e92d8808b9ff17835bf4ab89ec09095b0a89 | 2022-06-12T06:03:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-8 | 4 | null | transformers | 20,210 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-4 | 33b7db1629a3d959e2b74254072ccdc162ef9fc7 | 2022-06-12T06:03:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-4 | 4 | null | transformers | 20,211 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-7 | 3444caae306d23344a89090017ae602613042f12 | 2022-06-12T06:05:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-7 | 4 | null | transformers | 20,212 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-5 | 34a36bf6ce1567f3671578118d61ccd277173c31 | 2022-06-12T06:06:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-5 | 4 | null | transformers | 20,213 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-96 | fc3d0efbc821a02016cecda13629378a26803921 | 2022-06-12T06:05:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-96 | 4 | null | transformers | 20,214 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-99 | a00946afea9b1d39edabb65a7fc0f33f05491516 | 2022-06-12T06:05:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-99 | 4 | null | transformers | 20,215 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-94 | 0b4fdbb3989f6c29bb4fd7989d91be6262485f92 | 2022-06-12T06:06:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-94 | 4 | null | transformers | 20,216 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-93 | 10d2936266a8b93a024a509ada621867a3a74bc3 | 2022-06-12T06:06:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-93 | 4 | null | transformers | 20,217 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-97 | 4b88409e8780db8003af54d4e3ac01afad00e665 | 2022-06-12T06:05:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-97 | 4 | null | transformers | 20,218 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-95 | c9338dc17d4279255549f0af00908de573d8d0ce | 2022-06-12T06:06:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-95 | 4 | null | transformers | 20,219 | Entry not found |
kravchenko/uk-mt5-small | c6e5202f2d489ce2603b51dbe58fb7f1b9f1e332 | 2022-06-12T14:56:53.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"uk",
"en",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | kravchenko | null | kravchenko/uk-mt5-small | 4 | null | transformers | 20,220 | ---
language:
- uk
- en
tags:
- mt5
---
The aim is to compress the mT5-small model to leave only the Ukrainian language and some basic English.
It reproduces a similar result (but for another language) from [this](https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) Medium article.
Results:
- 300M params -> 75M params (a 75% reduction)
- 250K tokens -> 8900 tokens
- 1.1GB size model -> 0.3GB size model |
nlokam99/ada_sample_3 | 640e74100fefdae6466e69e785cf762845dd48e6 | 2022-06-12T17:43:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | nlokam99 | null | nlokam99/ada_sample_3 | 4 | null | transformers | 20,221 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
--- |
course5i/SEAD-L-6_H-384_A-12-rte | 53947eed4f2df58b74feb189da7af85ec8cba2c9 | 2022-06-12T21:06:01.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"dataset:glue",
"dataset:rte",
"arxiv:1910.01108",
"arxiv:1909.10351",
"arxiv:2002.10957",
"arxiv:1810.04805",
"arxiv:1804.07461",
"arxiv:1905.00537",
"transformers",
"SEAD",
"license:apache-2.0"
] | text-classification | false | course5i | null | course5i/SEAD-L-6_H-384_A-12-rte | 4 | null | transformers | 20,222 | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- rte
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-384_A-12-rte
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **rte** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased)
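The original card does not include an inference example; a minimal sketch for the RTE sentence-pair (textual entailment) setup is shown below. The example sentences are invented, and the label names should be checked against `model.config.id2label`.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "course5i/SEAD-L-6_H-384_A-12-rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# RTE is a sentence-pair task: does the premise entail the hypothesis?
inputs = tokenizer("A man is playing a guitar on stage.",
                   "Someone is performing music.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```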
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import os
import torch

hyperparameters = torch.load(os.path.join('training_args.bin'))
```
### Evaluation results
| eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.8231 | 1.7325 | 159.884 | 5.195 | 0.6150 | 277 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
course5i/SEAD-L-6_H-384_A-12-stsb | fa713818553d7cde2eb3008481426124fd787f32 | 2022-06-12T21:15:54.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"dataset:glue",
"dataset:stsb",
"arxiv:1910.01108",
"arxiv:1909.10351",
"arxiv:2002.10957",
"arxiv:1810.04805",
"arxiv:1804.07461",
"arxiv:1905.00537",
"transformers",
"SEAD",
"license:apache-2.0"
] | text-classification | false | course5i | null | course5i/SEAD-L-6_H-384_A-12-stsb | 4 | null | transformers | 20,223 | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- stsb
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-384_A-12-stsb
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **stsb** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased)
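The original card does not include an inference example; a minimal sketch for STS-B-style similarity scoring is shown below, assuming the usual single-logit regression head (scores roughly on the 0-5 STS-B scale). The example pair is invented.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "course5i/SEAD-L-6_H-384_A-12-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# STS-B is a regression task: the single logit is a similarity score
inputs = tokenizer("A plane is taking off.",
                   "An air plane is taking off.",
                   return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"similarity score: {score:.2f}")
```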
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import os
import torch

hyperparameters = torch.load(os.path.join('training_args.bin'))
```
### Evaluation results
| eval_pearson | eval_spearmanr | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:------------:|:--------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.9058 | 0.9032 | 2.0911 | 717.342 | 22.477 | 0.5057 | 1500 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
course5i/SEAD-L-6_H-384_A-12-qnli | f7b54a3bb5d8c21d49b45511b4aa7b5f4bf5c0a7 | 2022-06-12T21:34:41.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"dataset:glue",
"dataset:qnli",
"arxiv:1910.01108",
"arxiv:1909.10351",
"arxiv:2002.10957",
"arxiv:1810.04805",
"arxiv:1804.07461",
"arxiv:1905.00537",
"transformers",
"SEAD",
"license:apache-2.0"
] | text-classification | false | course5i | null | course5i/SEAD-L-6_H-384_A-12-qnli | 4 | null | transformers | 20,224 | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- qnli
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-384_A-12-qnli
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **qnli** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased)
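The original card does not include an inference example; a minimal sketch for the QNLI question/sentence setup is shown below. The question and sentence are invented, and the label names should be checked against `model.config.id2label`.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "course5i/SEAD-L-6_H-384_A-12-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QNLI: does the sentence contain the answer to the question?
question = "Where is the Eiffel Tower located?"
sentence = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```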
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import os
import torch

hyperparameters = torch.load(os.path.join('training_args.bin'))
```
### Evaluation results
| eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.9098 | 3.9867 | 1370.297 | 42.892 | 0.2570 | 5463 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
course5i/SEAD-L-6_H-384_A-12-qqp | b4453a323880198bba10ca9707fdc066e034e461 | 2022-06-12T22:24:04.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"dataset:glue",
"dataset:qqp",
"arxiv:1910.01108",
"arxiv:1909.10351",
"arxiv:2002.10957",
"arxiv:1810.04805",
"arxiv:1804.07461",
"arxiv:1905.00537",
"transformers",
"SEAD",
"license:apache-2.0"
] | text-classification | false | course5i | null | course5i/SEAD-L-6_H-384_A-12-qqp | 4 | null | transformers | 20,225 | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- qqp
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-384_A-12-qqp
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **qqp** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased)
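The original card does not include an inference example; a minimal sketch for QQP duplicate-question detection is shown below. The question pair is invented, and the label names should be checked against `model.config.id2label`.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "course5i/SEAD-L-6_H-384_A-12-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QQP: are the two questions duplicates of each other?
inputs = tokenizer("How do I learn Python quickly?",
                   "What is the fastest way to learn Python?",
                   return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
pred = int(probs.argmax())
print(model.config.id2label[pred], float(probs[pred]))
```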
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import os
import torch

hyperparameters = torch.load(os.path.join('training_args.bin'))
```
### Evaluation results
| eval_accuracy | eval_f1 | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:-------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.9126 | 0.8822 | 23.0122 | 1756.896 | 54.927 | 0.3389 | 40430 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
enoriega/kw_pubmed_keyword_sentence_10000_0.0003 | 440a23ea674111997b7a9c6e028cb99cfae2b8da | 2022-06-13T10:43:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | enoriega | null | enoriega/kw_pubmed_keyword_sentence_10000_0.0003 | 4 | null | transformers | 20,226 | Entry not found |
QuentinKemperino/ECHR_test_Merged | 9bcdf48251d8df18edf7f2a68056321911da8a98 | 2022-06-13T19:29:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:lex_glue",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index"
] | text-classification | false | QuentinKemperino | null | QuentinKemperino/ECHR_test_Merged | 4 | null | transformers | 20,227 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- lex_glue
model-index:
- name: ECHR_test_Merged
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ECHR_test_Merged
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the lex_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2162
- Macro-f1: 0.5607
- Micro-f1: 0.6726
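The card does not show how to run the model; the sketch below assumes the multi-label setup of the lex_glue ECtHR tasks (one sigmoid-activated logit per alleged ECHR article) and an arbitrary 0.5 decision threshold. The input facts are invented.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "QuentinKemperino/ECHR_test_Merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

facts = "The applicant alleged that his pre-trial detention was unlawfully prolonged."
inputs = tokenizer(facts, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits).squeeze()

# keep every label whose probability exceeds the threshold
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```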
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2278 | 0.44 | 500 | 0.3196 | 0.2394 | 0.4569 |
| 0.1891 | 0.89 | 1000 | 0.2827 | 0.3255 | 0.5112 |
| 0.1803 | 1.33 | 1500 | 0.2603 | 0.3961 | 0.5698 |
| 0.1676 | 1.78 | 2000 | 0.2590 | 0.4251 | 0.6003 |
| 0.1635 | 2.22 | 2500 | 0.2489 | 0.4186 | 0.6030 |
| 0.1784 | 2.67 | 3000 | 0.2445 | 0.4627 | 0.6159 |
| 0.1556 | 3.11 | 3500 | 0.2398 | 0.4757 | 0.6170 |
| 0.151 | 3.56 | 4000 | 0.2489 | 0.4725 | 0.6163 |
| 0.1564 | 4.0 | 4500 | 0.2289 | 0.5019 | 0.6416 |
| 0.1544 | 4.44 | 5000 | 0.2406 | 0.5013 | 0.6408 |
| 0.1516 | 4.89 | 5500 | 0.2351 | 0.5145 | 0.6510 |
| 0.1487 | 5.33 | 6000 | 0.2354 | 0.5164 | 0.6394 |
| 0.1385 | 5.78 | 6500 | 0.2385 | 0.5205 | 0.6486 |
| 0.145 | 6.22 | 7000 | 0.2337 | 0.5197 | 0.6529 |
| 0.1332 | 6.67 | 7500 | 0.2294 | 0.5421 | 0.6526 |
| 0.1293 | 7.11 | 8000 | 0.2167 | 0.5576 | 0.6652 |
| 0.1475 | 7.56 | 8500 | 0.2218 | 0.5676 | 0.6649 |
| 0.1376 | 8.0 | 9000 | 0.2203 | 0.5565 | 0.6709 |
| 0.1408 | 8.44 | 9500 | 0.2178 | 0.5541 | 0.6716 |
| 0.133 | 8.89 | 10000 | 0.2212 | 0.5692 | 0.6640 |
| 0.1363 | 9.33 | 10500 | 0.2148 | 0.5642 | 0.6736 |
| 0.1344 | 9.78 | 11000 | 0.2162 | 0.5607 | 0.6726 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
microsoft/swinv2-tiny-patch4-window16-256 | a7cfb0684bc557bf524cc5cfa1bba1e661a22ab5 | 2022-07-08T12:53:17.000Z | [
"pytorch",
"swinv2",
"transformers"
] | null | false | microsoft | null | microsoft/swinv2-tiny-patch4-window16-256 | 4 | null | transformers | 20,228 | Entry not found |
Alireza1044/mobilebert_mrpc | 0327be564e514145549324189dbeb380ee1fded3 | 2022-06-14T08:16:32.000Z | [
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Alireza1044 | null | Alireza1044/mobilebert_mrpc | 4 | null | transformers | 20,229 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8382352941176471
- name: F1
type: f1
value: 0.8888888888888888
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3672
- Accuracy: 0.8382
- F1: 0.8889
- Combined Score: 0.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Alireza1044/mobilebert_mnli | 17d50a149c9e472c9ae6e79005ef0ea0f81c8f7e | 2022-06-14T11:22:34.000Z | [
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Alireza1044 | null | Alireza1044/mobilebert_mnli | 4 | null | transformers | 20,230 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8230268510984541
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4595
- Accuracy: 0.8230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 48
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.3
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Alireza1044/mobilebert_qqp | 29fd236aa5fc2dbd115d9eb6226f9556719bbd05 | 2022-06-14T14:57:04.000Z | [
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Alireza1044 | null | Alireza1044/mobilebert_qqp | 4 | null | transformers | 20,231 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8988869651249073
- name: F1
type: f1
value: 0.8670050100852366
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qqp
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2458
- Accuracy: 0.8989
- F1: 0.8670
- Combined Score: 0.8829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.5
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Alireza1044/mobilebert_QNLI | e168cb085114a21c905f9399ef3e56070b2cafba | 2022-06-14T19:54:02.000Z | [
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Alireza1044 | null | Alireza1044/mobilebert_QNLI | 4 | null | transformers | 20,232 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9068277503203368
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3731
- Accuracy: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
voidful/phoneme-mt5 | 20ec7f09c21278cebef185419f9dfbacec7e17a9 | 2022-06-14T17:02:49.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | voidful | null | voidful/phoneme-mt5 | 4 | null | transformers | 20,233 | Entry not found |
totoro4007/cryptodeberta-base-all-finetuned | de08541cefb506b62199f1fb009baec865481fef | 2022-06-15T03:48:00.000Z | [
"pytorch",
"deberta",
"text-classification",
"transformers"
] | text-classification | false | totoro4007 | null | totoro4007/cryptodeberta-base-all-finetuned | 4 | null | transformers | 20,234 | Entry not found |
mesolitica/pretrained-wav2vec2-small-mixed | b073daacdb00890ccf1848de66d7f4deaa6b9c62 | 2022-06-15T14:55:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"pretraining",
"transformers",
"generated_from_keras_callback",
"model-index"
] | null | false | mesolitica | null | mesolitica/pretrained-wav2vec2-small-mixed | 4 | null | transformers | 20,235 | ---
tags:
- generated_from_keras_callback
model-index:
- name: pretrained-wav2vec2-base-mixed
results: []
---
# pretrained-wav2vec2-small-mixed
Wav2Vec2 SMALL pretrained on https://github.com/huseinzol05/malaya-speech/tree/master/data/mixed-stt; the TensorBoard files are also included in this repository.
This model was pretrained on 3 languages:
1. Malay
2. Singlish
3. Mandarin
**This model was trained on a single RTX 3090 Ti with 24GB of VRAM, provided by https://mesolitica.com/**. |
Hermite/DialoGPT-large-hermite2 | ff461379d8a5770aa336a2916ed150b65a6409ec | 2022-06-15T11:26:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Hermite | null | Hermite/DialoGPT-large-hermite2 | 4 | null | transformers | 20,236 | ---
tags:
- conversational
---
# Hermite DialoGPT Model |
microsoft/swinv2-small-patch4-window8-256 | 38c74174021843ac2fe182459ab05d05c04ab2a7 | 2022-07-09T05:55:47.000Z | [
"pytorch",
"swinv2",
"transformers"
] | null | false | microsoft | null | microsoft/swinv2-small-patch4-window8-256 | 4 | null | transformers | 20,237 | Entry not found |
microsoft/swinv2-small-patch4-window16-256 | 7ea09131e9bf33a267f169342693665503879b0f | 2022-07-08T12:59:04.000Z | [
"pytorch",
"swinv2",
"transformers"
] | null | false | microsoft | null | microsoft/swinv2-small-patch4-window16-256 | 4 | null | transformers | 20,238 | Entry not found |
microsoft/swinv2-base-patch4-window8-256 | f5f9e816fea166fed7db39a64f9dc8e65a02ce1c | 2022-07-08T13:13:16.000Z | [
"pytorch",
"swinv2",
"transformers"
] | null | false | microsoft | null | microsoft/swinv2-base-patch4-window8-256 | 4 | null | transformers | 20,239 | Entry not found |
microsoft/swinv2-base-patch4-window16-256 | 99308a4df870415a5c37834aa6fef756b0cb6b50 | 2022-07-08T13:19:09.000Z | [
"pytorch",
"swinv2",
"transformers"
] | null | false | microsoft | null | microsoft/swinv2-base-patch4-window16-256 | 4 | null | transformers | 20,240 | Entry not found |
microsoft/swinv2-base-patch4-window12-192-22k | 6ffc911ad2f241da866f6d1e4acdb1a329f70660 | 2022-07-08T13:16:05.000Z | [
"pytorch",
"swinv2",
"transformers"
] | null | false | microsoft | null | microsoft/swinv2-base-patch4-window12-192-22k | 4 | null | transformers | 20,241 | Entry not found |
ouiame/bert2gpt2Summy | a753104c50cba46faad84825466f23b22a58ae9b | 2022-06-15T19:31:08.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fr",
"dataset:ouiame/autotrain-data-trainproject",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | ouiame | null | ouiame/bert2gpt2Summy | 4 | null | transformers | 20,242 | ---
tags: autotrain
language: fr
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ouiame/autotrain-data-trainproject
co2_eq_emissions: 894.9753853627794
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 985232782
- CO2 Emissions (in grams): 894.9753853627794
## Validation Metrics
- Loss: 1.9692628383636475
- Rouge1: 19.3642
- Rouge2: 7.3644
- RougeL: 16.148
- RougeLsum: 16.4988
- Gen Len: 18.9975
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ouiame/autotrain-trainproject-985232782
``` |
Alireza1044/mobilebert_rte | b03cd8d1bf780c38c0e21b6c4194f8e63db3c7bf | 2022-06-15T16:24:42.000Z | [
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Alireza1044 | null | Alireza1044/mobilebert_rte | 4 | null | transformers | 20,243 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6678700361010831
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rte
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8396
- Accuracy: 0.6679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jhliu/ClinicalAdaptation-PubMedBERT-base-uncased-MIMIC-sentence | 0deb81512b2967edd75b971b035abc8a400e8104 | 2022-06-16T06:27:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jhliu | null | jhliu/ClinicalAdaptation-PubMedBERT-base-uncased-MIMIC-sentence | 4 | null | transformers | 20,244 | Entry not found |
kcarnold/inquisitive-full | 8caef729299c5f537d6a83beba2a5d771a0d0909 | 2022-06-16T20:49:45.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | kcarnold | null | kcarnold/inquisitive-full | 4 | null | transformers | 20,245 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: inquisitive-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# inquisitive-full
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0
- Datasets 2.3.0
- Tokenizers 0.12.1
|
huggingtweets/tomhanks | fd5585aac273ab7fe4496d966e9d436fe8e6c764 | 2022-06-17T01:00:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/tomhanks | 4 | null | transformers | 20,246 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1193951507026075648/Ot3GmqGK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tom Hanks</div>
<div style="text-align: center; font-size: 14px;">@tomhanks</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tom Hanks.
| Data | Tom Hanks |
| --- | --- |
| Tweets downloaded | 948 |
| Retweets | 9 |
| Short tweets | 15 |
| Tweets kept | 924 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mkvpkso/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tomhanks's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2tplh98q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2tplh98q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tomhanks')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
S2312dal/M6_MLM | d7874e94876e6f623ce71a91479bef2231d2f70a | 2022-06-17T08:38:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | S2312dal | null | S2312dal/M6_MLM | 4 | null | transformers | 20,247 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: M6_MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M6_MLM
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0237
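The card does not include a usage example; since the pipeline tag is fill-mask, a minimal sketch is shown below. The masked sentence is invented (the fine-tuning dataset is not documented), and `[MASK]` assumes the default BERT mask token.
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="S2312dal/M6_MLM")

# print the top candidate tokens for the masked position
for candidate in fill("The patient was treated with a new [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```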
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4015 | 1.0 | 25 | 2.1511 |
| 2.2207 | 2.0 | 50 | 2.1268 |
| 2.168 | 3.0 | 75 | 2.0796 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
janeel/bigbird-base-trivia-itc-finetuned-squad | 34b730baa0c9a0292889621b4021eecdb25bba3b | 2022-06-18T05:12:59.000Z | [
"pytorch",
"tensorboard",
"big_bird",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | janeel | null | janeel/bigbird-base-trivia-itc-finetuned-squad | 4 | null | transformers | 20,248 | Entry not found |
Danastos/newsqa_bert_el_4 | c2c3e1c2014af6a754a983adf07ef7bcf4431a0c | 2022-06-19T11:37:22.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Danastos | null | Danastos/newsqa_bert_el_4 | 4 | null | transformers | 20,249 | Entry not found |
wiselinjayajos/finetuned-bert-mrpc | 76ef7cd6d972c03a76d7bc1fdcfa7d22ce0cdd97 | 2022-06-17T14:58:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | wiselinjayajos | null | wiselinjayajos/finetuned-bert-mrpc | 4 | null | transformers | 20,250 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: finetuned-bert-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8455882352941176
- name: F1
type: f1
value: 0.8908145580589255
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4755
- Accuracy: 0.8456
- F1: 0.8908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
Trained on my local laptop and the training time took 3 hours.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5331 | 1.0 | 230 | 0.3837 | 0.8505 | 0.8943 |
| 0.3023 | 2.0 | 460 | 0.3934 | 0.8505 | 0.8954 |
| 0.1472 | 3.0 | 690 | 0.4755 | 0.8456 | 0.8908 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
simecek/DNADeberta2 | 0239560f6d0df37e4a40195b687c2fc4bb18ba3f | 2022-06-20T20:16:25.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/DNADeberta2 | 4 | null | transformers | 20,251 | Entry not found |
S2312dal/M1_MLM_cross | be8563b03e38705e5a57103c2c40ddbfe94c59ba | 2022-06-21T21:31:13.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | S2312dal | null | S2312dal/M1_MLM_cross | 4 | null | transformers | 20,252 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: M1_MLM_cross
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M1_MLM_cross
This model is a fine-tuned version of [S2312dal/M1_MLM](https://huggingface.co/S2312dal/M1_MLM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0106
- Pearson: 0.9723
- Spearmanr: 0.9112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8.0
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.0094 | 1.0 | 131 | 0.0342 | 0.9209 | 0.8739 |
| 0.0091 | 2.0 | 262 | 0.0157 | 0.9585 | 0.9040 |
| 0.0018 | 3.0 | 393 | 0.0106 | 0.9723 | 0.9112 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Javon/distilbert-base-uncased-finetuned-emotion | 3eedfb6b70af51c393b4b717eb34eb4e3e0c74b3 | 2022-06-18T03:17:30.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Javon | null | Javon/distilbert-base-uncased-finetuned-emotion | 4 | null | transformers | 20,253 | Entry not found |
Willy/bert-base-spanish-wwm-cased-finetuned-NLP-IE-2 | b8f6237a3d6b55be4097fd545ff77f3d3de19e92 | 2022-06-18T10:07:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Willy | null | Willy/bert-base-spanish-wwm-cased-finetuned-NLP-IE-2 | 4 | null | transformers | 20,254 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-NLP-IE-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-NLP-IE-2
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5279
- Accuracy: 0.7836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6008 | 1.0 | 9 | 0.5243 | 0.7836 |
| 0.6014 | 2.0 | 18 | 0.5279 | 0.7836 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
S2312dal/M4_MLM_cross | 7c7e19feea42a41c11c08002254aafc0f6955abf | 2022-06-18T08:48:02.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | S2312dal | null | S2312dal/M4_MLM_cross | 4 | null | transformers | 20,255 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: M4_MLM_cross
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M4_MLM_cross
This model is a fine-tuned version of [S2312dal/M4_MLM](https://huggingface.co/S2312dal/M4_MLM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0222
- Pearson: 0.9472
- Spearmanr: 0.8983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8.0
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.0353 | 1.0 | 131 | 0.0590 | 0.8326 | 0.8225 |
| 0.0478 | 2.0 | 262 | 0.0368 | 0.9234 | 0.8894 |
| 0.0256 | 3.0 | 393 | 0.0222 | 0.9472 | 0.8983 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Danastos/qacombined_bert_el_3 | 4de56fd8cef6d2473142c9d5f8cbed4f618b65c5 | 2022-06-19T13:14:19.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Danastos | null | Danastos/qacombined_bert_el_3 | 4 | null | transformers | 20,256 | Entry not found |
raedinkhaled/swin-tiny-patch4-window7-224-finetuned-mri | a51fae6a36832538720e94bd15c387fa9127522c | 2022-06-19T00:13:22.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | raedinkhaled | null | raedinkhaled/swin-tiny-patch4-window7-224-finetuned-mri | 4 | null | transformers | 20,257 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-mri
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9806603773584905
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-mri
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0608
- Accuracy: 0.9807
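A minimal inference sketch with the image-classification pipeline (the image path is a placeholder; the class labels come from the dataset's folder names):
```python
from transformers import pipeline

# Sketch: classify a single MRI image with the fine-tuned Swin model.
classifier = pipeline(
    "image-classification",
    model="raedinkhaled/swin-tiny-patch4-window7-224-finetuned-mri",
)
predictions = classifier("path/to/mri_scan.png")  # placeholder path
print(predictions)  # list of {"label": ..., "score": ...} dicts
```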
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0592 | 1.0 | 447 | 0.0823 | 0.9695 |
| 0.0196 | 2.0 | 894 | 0.0761 | 0.9739 |
| 0.0058 | 3.0 | 1341 | 0.0608 | 0.9807 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dibsondivya/ernie-phmtweets-sutd | 943d330608e2e06cb35bbd6cdfdef3daf86191d6 | 2022-06-19T11:38:29.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:custom-phm-tweets",
"arxiv:1802.09130",
"transformers",
"ernie",
"health",
"tweet",
"model-index"
] | text-classification | false | dibsondivya | null | dibsondivya/ernie-phmtweets-sutd | 4 | null | transformers | 20,258 | ---
tags:
- ernie
- health
- tweet
datasets:
- custom-phm-tweets
metrics:
- accuracy
model-index:
- name: ernie-phmtweets-sutd
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: custom-phm-tweets
type: labelled
metrics:
- name: Accuracy
type: accuracy
value: 0.885
---
# ernie-phmtweets-sutd
This model is a fine-tuned version of [ernie-2.0-en](https://huggingface.co/nghuyong/ernie-2.0-en) for text classification, identifying mentions of public health events in tweets. The project is based on the Emory University paper on [Detection of Personal Health Mentions in Social Media](https://arxiv.org/pdf/1802.09130v2.pdf), which worked with this [custom dataset](https://github.com/emory-irlab/PHM2017).
It achieves the following results on the evaluation set:
- Accuracy: 0.885
## Usage
```Python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("dibsondivya/ernie-phmtweets-sutd")
model = AutoModelForSequenceClassification.from_pretrained("dibsondivya/ernie-phmtweets-sutd")
```
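Once loaded, a single tweet can be scored as follows. This is a sketch that continues the snippet above; the label names are read from the model config rather than assumed:
```Python
import torch

# Sketch: classify one tweet with the tokenizer and model loaded above.
text = "Down with a fever again, third time this month."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # label names come from the model config
```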
### Model Evaluation Results
With Validation Set
- Accuracy: 0.889763779527559
With Test Set
- Accuracy: 0.884643644379133
## References for ERNIE 2.0 Model
```bibtex
@article{sun2019ernie20,
title={ERNIE 2.0: A Continual Pre-training Framework for Language Understanding},
author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:1907.12412},
year={2019}
}
``` |
Alireza1044/MobileBERT_Theseus-mrpc | 71df01bac045a96e49fd9c98bf51c748c306231f | 2022-06-19T12:34:58.000Z | [
"pytorch",
"mobilebert",
"text-classification",
"transformers"
] | text-classification | false | Alireza1044 | null | Alireza1044/MobileBERT_Theseus-mrpc | 4 | null | transformers | 20,259 | Entry not found |
Alireza1044/MobileBERT_Theseus-cola | 189534054bc7a4827d7e330e14740b45728ec8e3 | 2022-06-19T12:59:51.000Z | [
"pytorch",
"mobilebert",
"text-classification",
"transformers"
] | text-classification | false | Alireza1044 | null | Alireza1044/MobileBERT_Theseus-cola | 4 | null | transformers | 20,260 | Entry not found |
Alireza1044/MobileBERT_Theseus-sts-b | 3497281f5d660ad5c905891aa6ddc40c9d7212b6 | 2022-06-19T13:50:37.000Z | [
"pytorch",
"mobilebert",
"text-classification",
"transformers"
] | text-classification | false | Alireza1044 | null | Alireza1044/MobileBERT_Theseus-sts-b | 4 | null | transformers | 20,261 | Entry not found |
Mikune/text-sum-po1 | 9e104ddb90ec8413a4e7a430ed9b093c2d9fc2ec | 2022-06-19T15:57:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Mikune | null | Mikune/text-sum-po1 | 4 | 1 | transformers | 20,262 | Entry not found |
amritpattnaik/mt5-small-amrit-finetuned-amazon-en | 046f7b8170b15147a218ba51b3866a61cce6a871 | 2022-06-19T16:32:53.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:amazon_reviews_multi",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | amritpattnaik | null | amritpattnaik/mt5-small-amrit-finetuned-amazon-en | 4 | null | transformers | 20,263 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- rouge
model-index:
- name: mt5-small-amrit-finetuned-amazon-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: en
metrics:
- name: Rouge1
type: rouge
value: 15.4603
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-amrit-finetuned-amazon-en
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3112
- Rouge1: 15.4603
- Rouge2: 7.1882
- Rougel: 15.2221
- Rougelsum: 15.1231
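A minimal summarization sketch (the review text is an invented example; generation lengths are illustrative):
```python
from transformers import pipeline

# Sketch: summarize a product review with the fine-tuned mT5 model.
summarizer = pipeline(
    "summarization",
    model="amritpattnaik/mt5-small-amrit-finetuned-amazon-en",
)
review = (
    "I bought this kettle a month ago. It boils quickly and the handle stays cool, "
    "but the lid is fiddly to open with one hand."
)
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```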
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 8.7422 | 1.0 | 771 | 3.6517 | 12.9002 | 4.8601 | 12.6743 | 12.6561 |
| 4.1322 | 2.0 | 1542 | 3.4937 | 14.1146 | 6.5433 | 14.0882 | 14.0484 |
| 3.7426 | 3.0 | 2313 | 3.4070 | 14.4797 | 6.8527 | 14.1544 | 14.2753 |
| 3.5743 | 4.0 | 3084 | 3.3439 | 15.9805 | 7.8873 | 15.4935 | 15.41 |
| 3.4489 | 5.0 | 3855 | 3.3122 | 16.5749 | 7.9809 | 16.1922 | 16.1226 |
| 3.3602 | 6.0 | 4626 | 3.3187 | 16.4809 | 7.7656 | 16.211 | 16.1185 |
| 3.3215 | 7.0 | 5397 | 3.3180 | 15.4615 | 7.1361 | 15.1919 | 15.1144 |
| 3.294 | 8.0 | 6168 | 3.3112 | 15.4603 | 7.1882 | 15.2221 | 15.1231 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Alireza1044/MobileBERT_Theseus-sst-2 | e48439cd6447a90d4765cb661844870d05a47dff | 2022-06-19T15:51:34.000Z | [
"pytorch",
"mobilebert",
"text-classification",
"transformers"
] | text-classification | false | Alireza1044 | null | Alireza1044/MobileBERT_Theseus-sst-2 | 4 | null | transformers | 20,264 | Entry not found |
mo7amed3ly/distilbert-base-uncased-finetuned-ner | 899afb43d71a4fe0bf1037626791c859d0bdbd75 | 2022-06-19T16:47:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | mo7amed3ly | null | mo7amed3ly/distilbert-base-uncased-finetuned-ner | 4 | null | transformers | 20,265 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9269638128861429
- name: Recall
type: recall
value: 0.9399261662378342
- name: F1
type: f1
value: 0.9333999888907405
- name: Accuracy
type: accuracy
value: 0.984367801483788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0590
- Precision: 0.9270
- Recall: 0.9399
- F1: 0.9334
- Accuracy: 0.9844
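A minimal inference sketch with the token-classification pipeline (the example sentence is invented):
```python
from transformers import pipeline

# Sketch: extract named entities with the fine-tuned NER model.
ner = pipeline(
    "token-classification",
    model="mo7amed3ly/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```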
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2483 | 1.0 | 878 | 0.0696 | 0.9143 | 0.9211 | 0.9177 | 0.9807 |
| 0.0504 | 2.0 | 1756 | 0.0593 | 0.9206 | 0.9347 | 0.9276 | 0.9832 |
| 0.0301 | 3.0 | 2634 | 0.0590 | 0.9270 | 0.9399 | 0.9334 | 0.9844 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
voidful/phone-led-base-16384 | cbff3c2d1ba6990a75f9e11821f03f15a80efffe | 2022-06-20T04:29:59.000Z | [
"pytorch",
"led",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | voidful | null | voidful/phone-led-base-16384 | 4 | null | transformers | 20,266 | Entry not found |
ahujaniharika95/minilm-uncased-squad2-finetuned-squad | 9474d29e54f52e04599d0db1e0a618f043be7ac7 | 2022-06-20T12:03:02.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ahujaniharika95 | null | ahujaniharika95/minilm-uncased-squad2-finetuned-squad | 4 | null | transformers | 20,267 | Entry not found |
bradleyg223/deberta-v3-large-finetuned-abm | d79c1876ced3f39d61ee211f1b64644122633f7a | 2022-06-21T18:34:12.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | bradleyg223 | null | bradleyg223/deberta-v3-large-finetuned-abm | 4 | null | transformers | 20,268 | Entry not found |
romjansen/mbert-base-cased-NER-NL-legislation-refs | bb94610e95c74e0233b48fd65770f5e67b46bf6e | 2022-06-24T19:13:16.000Z | [
"pytorch",
"bert",
"token-classification",
"nl",
"dataset:romjansen/mbert-base-cased-NER-NL-legislation-refs-data",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | romjansen | null | romjansen/mbert-base-cased-NER-NL-legislation-refs | 4 | null | transformers | 20,269 | |
sasha/dog-food-vit-base-patch16-224-in21k | 2205441a2c46a2520a14a2deb1ce6deced927d8d | 2022-06-22T13:50:47.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:sasha/dog-food",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | sasha | null | sasha/dog-food-vit-base-patch16-224-in21k | 4 | null | transformers | 20,270 | ---
tags:
- image-classification
- pytorch
- huggingpics
datasets:
- sasha/dog-food
metrics:
- accuracy
- f1
model-index:
- name: dog-food-vit-base-patch16-224-in21k
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Dog Food
type: sasha/dog-food
metrics:
- name: Accuracy
type: accuracy
value: 0.9988889098167419
- task:
type: image-classification
name: Image Classification
dataset:
name: sasha/dog-food
type: sasha/dog-food
config: sasha--dog-food
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9977777777777778
verified: true
- name: Precision
type: precision
value: 0.9966777408637874
verified: true
- name: Recall
type: recall
value: 1.0
verified: true
- name: AUC
type: auc
value: 0.9999777777777779
verified: true
- name: F1
type: f1
value: 0.9983361064891847
verified: true
- name: loss
type: loss
value: 0.009058385156095028
verified: true
---
# dog-food-vit-base-patch16-224-in21k
This model was trained on the `train` split of the [Dogs vs Food](https://huggingface.co/datasets/sasha/dog-food) dataset -- try training your own using
[the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb)!
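A minimal inference sketch (the image path is a placeholder; the labels should correspond to the dataset's `dog` and `food` classes, but verify against the model config):
```python
from transformers import pipeline

# Sketch: decide whether an image shows a dog or food.
classifier = pipeline(
    "image-classification",
    model="sasha/dog-food-vit-base-patch16-224-in21k",
)
print(classifier("path/to/image.jpg"))  # placeholder path
```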
## Example Images
#### dog

#### food
 |
deepesh0x/autotrain-GlueModels-1010733562 | 22a85ec4b6fcd3a9cec2f160b8b26c3e21d716dd | 2022-06-21T01:48:26.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:deepesh0x/autotrain-data-GlueModels",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | deepesh0x | null | deepesh0x/autotrain-GlueModels-1010733562 | 4 | null | transformers | 20,271 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-GlueModels
co2_eq_emissions: 60.24263131580023
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1010733562
- CO2 Emissions (in grams): 60.24263131580023
## Validation Metrics
- Loss: 0.1812974065542221
- Accuracy: 0.9252564102564103
- Precision: 0.9409888357256778
- Recall: 0.9074596257369905
- AUC: 0.9809618001947271
- F1: 0.923920135717082
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-GlueModels-1010733562
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-GlueModels-1010733562", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-GlueModels-1010733562", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Jeevesh8/std_0pnt2_bert_ft_cola-46 | 461a7f0027685a7163286baa6097c70f13b0fa2f | 2022-06-21T13:33:41.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-46 | 4 | null | transformers | 20,272 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-78 | 51c652a552d6f3003a49cfce8c761eee7763a17d | 2022-06-21T13:27:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-78 | 4 | null | transformers | 20,273 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-67 | 7381b9a8fcb943a8fad64fb08fedc2329856f2d1 | 2022-06-21T13:28:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-67 | 4 | null | transformers | 20,274 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-62 | daaf98e4443a236505cd503705cab78fc240ce83 | 2022-06-21T13:30:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-62 | 4 | null | transformers | 20,275 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-76 | b43a5bb564b0d2193c9ebc71d89789e8144b2d4f | 2022-06-21T13:27:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-76 | 4 | null | transformers | 20,276 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-65 | 96c29c218efa3eedec10b5a8ec981da3733ad940 | 2022-06-21T13:30:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-65 | 4 | null | transformers | 20,277 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-79 | e1d4f22fe9a1a10382db405ef768370c4da3d425 | 2022-06-21T13:28:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-79 | 4 | null | transformers | 20,278 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-73 | 189569371f4e545e3671cbed23d4accc27945fb3 | 2022-06-21T13:28:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-73 | 4 | null | transformers | 20,279 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-75 | c17e8655c02f4b980ca5bf7e69906666713c0458 | 2022-06-21T13:28:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-75 | 4 | null | transformers | 20,280 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-74 | 860f754723d74f9e85b1a0042adbc88e738ff2b5 | 2022-06-21T13:28:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-74 | 4 | null | transformers | 20,281 | Entry not found |
Alireza1044/MobileBERT_Theseus-mnli | 69ac2e0b05a673b18a7d38d2a4079c87ed5c2aaf | 2022-06-21T13:20:40.000Z | [
"pytorch",
"mobilebert",
"text-classification",
"transformers"
] | text-classification | false | Alireza1044 | null | Alireza1044/MobileBERT_Theseus-mnli | 4 | null | transformers | 20,282 | Entry not found |
Mascariddu8/test-masca | e9e5ebc08d5b5f6cac14be7f1160e034cf4b9778 | 2022-06-21T16:57:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Mascariddu8 | null | Mascariddu8/test-masca | 4 | null | transformers | 20,283 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: test-masca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-masca
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
deepesh0x/autotrain-mlsec-1013333734 | 911b363dcb77b8b956bf370b18ba2b13a9a20539 | 2022-06-21T19:12:28.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:deepesh0x/autotrain-data-mlsec",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | deepesh0x | null | deepesh0x/autotrain-mlsec-1013333734 | 4 | null | transformers | 20,284 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-mlsec
co2_eq_emissions: 308.7012650779217
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1013333734
- CO2 Emissions (in grams): 308.7012650779217
## Validation Metrics
- Loss: 0.20877738296985626
- Accuracy: 0.9396153846153846
- Precision: 0.9291791791791791
- Recall: 0.9518072289156626
- AUC: 0.9671522989580735
- F1: 0.9403570976320121
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-mlsec-1013333734
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-mlsec-1013333734", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-mlsec-1013333734", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
QuentinKemperino/ECHR_test_2_task_B | 5fc3a212dc589e2907a8d65c5f193fc213fbd7d6 | 2022-06-22T05:03:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:lex_glue",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index"
] | text-classification | false | QuentinKemperino | null | QuentinKemperino/ECHR_test_2_task_B | 4 | null | transformers | 20,285 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- lex_glue
model-index:
- name: ECHR_test_2_task_B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ECHR_test_2_task_B
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the lex_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2092
- Macro-f1: 0.5250
- Micro-f1: 0.6190
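The macro/micro-F1 metrics over lex_glue's ECtHR Task B suggest a multi-label setup. A sketch that thresholds per-label sigmoid probabilities (the 0.5 threshold and the single-sentence input are assumptions; verify the problem type in the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch: multi-label inference for the ECtHR Task B model.
tokenizer = AutoTokenizer.from_pretrained("QuentinKemperino/ECHR_test_2_task_B")
model = AutoModelForSequenceClassification.from_pretrained("QuentinKemperino/ECHR_test_2_task_B")

facts = "The applicant complained about the length of the criminal proceedings."
inputs = tokenizer(facts, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits).squeeze()
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]  # assumed threshold
print(predicted)
```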
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2119 | 0.44 | 500 | 0.2945 | 0.2637 | 0.4453 |
| 0.1702 | 0.89 | 1000 | 0.2734 | 0.3246 | 0.4843 |
| 0.1736 | 1.33 | 1500 | 0.2633 | 0.3725 | 0.5133 |
| 0.1571 | 1.78 | 2000 | 0.2549 | 0.3942 | 0.5417 |
| 0.1476 | 2.22 | 2500 | 0.2348 | 0.4187 | 0.5649 |
| 0.1599 | 2.67 | 3000 | 0.2427 | 0.4286 | 0.5606 |
| 0.1481 | 3.11 | 3500 | 0.2210 | 0.4664 | 0.5780 |
| 0.1412 | 3.56 | 4000 | 0.2542 | 0.4362 | 0.5617 |
| 0.1505 | 4.0 | 4500 | 0.2249 | 0.4728 | 0.5863 |
| 0.1425 | 4.44 | 5000 | 0.2311 | 0.4576 | 0.5845 |
| 0.1461 | 4.89 | 5500 | 0.2261 | 0.4590 | 0.5832 |
| 0.1451 | 5.33 | 6000 | 0.2248 | 0.4738 | 0.5901 |
| 0.1281 | 5.78 | 6500 | 0.2317 | 0.4641 | 0.5896 |
| 0.1354 | 6.22 | 7000 | 0.2366 | 0.4639 | 0.5946 |
| 0.1204 | 6.67 | 7500 | 0.2311 | 0.4875 | 0.5877 |
| 0.1229 | 7.11 | 8000 | 0.2083 | 0.4815 | 0.6020 |
| 0.1368 | 7.56 | 8500 | 0.2170 | 0.5213 | 0.6021 |
| 0.1288 | 8.0 | 9000 | 0.2136 | 0.5336 | 0.6176 |
| 0.1275 | 8.44 | 9500 | 0.2180 | 0.5204 | 0.6082 |
| 0.1232 | 8.89 | 10000 | 0.2147 | 0.5334 | 0.6083 |
| 0.1319 | 9.33 | 10500 | 0.2121 | 0.5312 | 0.6186 |
| 0.1267 | 9.78 | 11000 | 0.2092 | 0.5250 | 0.6190 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Gaborandi/Clinical-Longformer-MLM-pubmed | 13fd3a5f0cd2dee8d25d9c42091c457cb4dd498c | 2022-06-22T02:31:35.000Z | [
"pytorch",
"tensorboard",
"longformer",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Gaborandi | null | Gaborandi/Clinical-Longformer-MLM-pubmed | 4 | null | transformers | 20,286 | ---
tags:
- generated_from_trainer
model-index:
- name: Clinical-Longformer-MLM-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clinical-Longformer-MLM-pubmed
This model is a fine-tuned version of [yikuan8/Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3126
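A minimal fill-mask sketch (the clinical sentence is invented; Longformer tokenizers typically use `<mask>`, which can be confirmed with `tokenizer.mask_token`):
```python
from transformers import pipeline

# Sketch: predict a masked token with the domain-adapted Clinical-Longformer.
fill_mask = pipeline(
    "fill-mask",
    model="Gaborandi/Clinical-Longformer-MLM-pubmed",
)
for prediction in fill_mask("The patient was started on <mask> for hypertension."):
    print(prediction["token_str"], round(prediction["score"], 3))
```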
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 471 | 1.3858 |
| No log | 2.0 | 942 | 1.3160 |
| No log | 3.0 | 1413 | 1.2951 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
MRF18/results | 89f96a8652c282622edf254b06f9ba44042f6f0e | 2022-06-23T07:18:42.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | MRF18 | null | MRF18/results | 4 | null | transformers | 20,287 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [MRF18/results](https://huggingface.co/MRF18/results) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
davidcechak/DNADeberta_finedemo_human_or_worm | f82bde25909acbd88b6f882a065def63a07fa7c6 | 2022-06-22T08:31:26.000Z | [
"pytorch",
"deberta",
"text-classification",
"transformers"
] | text-classification | false | davidcechak | null | davidcechak/DNADeberta_finedemo_human_or_worm | 4 | null | transformers | 20,288 | Entry not found |
Elron/deberta-v3-large-irony | 9a2f6f08f7301b6e62cabbb97b29090369e44e53 | 2022-06-22T09:46:26.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | Elron | null | Elron/deberta-v3-large-irony | 4 | null | transformers | 20,289 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large
results: []
---
# deberta-v3-large-irony
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Model description
Test set results:
| Model | Emotion | Hate | Irony | Offensive | Sentiment |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| deberta-v3-large | **86.3** | **61.3** | **87.1** | **86.4** | **73.9** |
| BERTweet | 79.3 | - | 82.1 | 79.5 | 73.4 |
| RoB-RT | 79.5 | 52.3 | 61.7 | 80.5 | 69.3 |
[source:papers_with_code](https://paperswithcode.com/sota/sentiment-analysis-on-tweeteval)
## Intended uses & limitations
Classifying attributes of interest, such as irony, in Twitter-like data.
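A minimal usage sketch (the label names come from the model config; the example text is invented):
```python
from transformers import pipeline

# Sketch: detect irony in a short tweet-like text.
classifier = pipeline(
    "text-classification",
    model="Elron/deberta-v3-large-irony",
)
print(classifier("Great, another Monday. Exactly what I needed."))
```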
## Training and evaluation data
[tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Training procedure
Fine-tuned and evaluated with [run_glue.py]()
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10.0
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6478 | 1.12 | 100 | 0.5890 | 0.7529 |
| 0.5013 | 2.25 | 200 | 0.5873 | 0.7707 |
| 0.388 | 3.37 | 300 | 0.6993 | 0.7602 |
| 0.3169 | 4.49 | 400 | 0.6773 | 0.7874 |
| 0.2693 | 5.61 | 500 | 0.7172 | 0.7707 |
| 0.2396 | 6.74 | 600 | 0.7397 | 0.7801 |
| 0.2284 | 7.86 | 700 | 0.8096 | 0.7550 |
| 0.2207 | 8.98 | 800 | 0.7827 | 0.7654 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
Smith123/tiny-bert-sst2-distilled | ba207715d2798caa0c8f4d2e92fdd4100cb9dc33 | 2022-06-29T09:07:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Smith123 | null | Smith123/tiny-bert-sst2-distilled | 4 | null | transformers | 20,290 | Entry not found |
lmqg/bart-base-subjqa-vanilla-movies | 44bdbf90381705c6d965c19f18ffbb509b3f8338 | 2022-06-22T10:50:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-vanilla-movies | 4 | null | transformers | 20,291 | Entry not found |
lmqg/bart-large-subjqa-vanilla-electronics | 98e29064e81a2bd0a8579899f060ca22641ab211 | 2022-06-22T11:11:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-vanilla-electronics | 4 | null | transformers | 20,292 | Entry not found |
epomponio/my-finetuned-bert | fd9b24904deaefa6b64b4d4ec8e421b2dd7e3eba | 2022-06-23T07:38:27.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | epomponio | null | epomponio/my-finetuned-bert | 4 | null | transformers | 20,293 | Entry not found |
lmqg/bart-large-subjqa-vanilla-movies | acad49d50b3acf1090a8800cb989b806848529af | 2022-06-22T11:48:10.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-vanilla-movies | 4 | null | transformers | 20,294 | Entry not found |
sasuke/distilbert-base-uncased-finetuned-squad1 | e3ca85e71bc5e01e558c24ef68cdad65ed3e6267 | 2022-06-22T13:23:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sasuke | null | sasuke/distilbert-base-uncased-finetuned-squad1 | 4 | null | transformers | 20,295 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-45 | 11a7c7455f2a810b0479726f55ecab2c41ff15d5 | 2022-06-22T14:56:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-45 | 4 | null | transformers | 20,296 | Entry not found |
amandaraeb/qs | 7f7a44a05cdb52418385667d766675ff0c527f70 | 2022-06-23T00:11:12.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | amandaraeb | null | amandaraeb/qs | 4 | null | transformers | 20,297 | Entry not found |
epomponio/finetuned-bert-model | 1c38ee9e508c85dc1aa5daf53efe50085cb6429c | 2022-06-23T09:11:26.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | epomponio | null | epomponio/finetuned-bert-model | 4 | null | transformers | 20,298 | Entry not found |
enoriega/kw_pubmed_vanilla_sentence_10000_0.0003_2 | 421264abf8ebfd1e9b864fbb503b7b3e850c9135 | 2022-06-24T18:35:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"dataset:enoriega/keyword_pubmed",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | enoriega | null | enoriega/kw_pubmed_vanilla_sentence_10000_0.0003_2 | 4 | null | transformers | 20,299 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- enoriega/keyword_pubmed
metrics:
- accuracy
model-index:
- name: kw_pubmed_vanilla_sentence_10000_0.0003_2
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: enoriega/keyword_pubmed sentence
type: enoriega/keyword_pubmed
args: sentence
metrics:
- name: Accuracy
type: accuracy
value: 0.6767448105720579
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kw_pubmed_vanilla_sentence_10000_0.0003_2
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the enoriega/keyword_pubmed sentence dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5883
- Accuracy: 0.6767
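A minimal fill-mask sketch (the sentence is invented; the PubMedBERT base model is BERT-style, so the mask token should be `[MASK]`, which can be confirmed with `tokenizer.mask_token`):
```python
from transformers import pipeline

# Sketch: masked-token prediction with the keyword-pretrained PubMedBERT checkpoint.
fill_mask = pipeline(
    "fill-mask",
    model="enoriega/kw_pubmed_vanilla_sentence_10000_0.0003_2",
)
print(fill_mask("Aspirin irreversibly inhibits [MASK] synthesis."))
```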
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 500
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|