modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SCORE/claim3b-distilbert-base-uncased | 3d28d86520836a1067a33bcb5afad107e6902cf6 | 2021-12-14T16:52:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | SCORE | null | SCORE/claim3b-distilbert-base-uncased | 6 | null | transformers | 15,000 | Entry not found |
SEBIS/code_trans_t5_base_api_generation_multitask | 2ad55ba2655c11576cd20f2e35796f0cb33f1166 | 2021-06-23T03:59:20.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_api_generation_multitask | 6 | null | transformers | 15,001 | ---
tags:
- summarization
widget:
- text: "parse the uses licence node of this package , if any , and returns the license definition if theres"
---
# CodeTrans model for api recommendation generation
Pretrained model for api recommendation generation using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate api usage for java programming tasks.
### How to use
Here is how to use this model to generate api usage recommendations using the Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/api%20generation/base_model.ipynb).
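Note that `AutoModelWithLMHead` is deprecated in recent versions of Transformers. If the snippet above fails on a newer release, a minimal equivalent sketch (assuming `AutoModelForSeq2SeqLM` is available in your installed version) looks like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

model_name = "SEBIS/code_trans_t5_base_api_generation_multitask"
tokenizer = AutoTokenizer.from_pretrained(model_name, skip_special_tokens=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# device=0 selects the first GPU; use device=-1 (or omit it) to run on CPU.
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, device=0)
summarizer(["parse the uses licence node of this package , if any , and returns the license definition if theres"])
```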
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 480,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
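For reference, an inverse square root schedule holds the learning rate constant during warm-up and afterwards decays it proportionally to 1/sqrt(step). A minimal sketch of such a schedule (the peak rate and warm-up length below are illustrative assumptions, not values reported for CodeTrans):
```python
import math

def inverse_sqrt_lr(step, peak_lr=0.01, warmup_steps=10_000):
    # Hold peak_lr for warmup_steps, then decay proportionally to 1/sqrt(step).
    # peak_lr and warmup_steps are illustrative assumptions, not the authors' settings.
    return peak_lr / math.sqrt(max(step, warmup_steps) / warmup_steps)

print(inverse_sqrt_lr(1_000))   # 0.01 during warm-up
print(inverse_sqrt_lr(40_000))  # 0.005 once the decay kicks in
```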
## Evaluation results
For the api recommendation generation task, the different models achieve the following results (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_code_comment_generation_java | 281adbaeacc9120c0d62a11a385aca672a673e77 | 2021-06-23T04:05:04.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_comment_generation_java | 6 | null | transformers | 15,002 | ---
tags:
- summarization
widget:
- text: "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
---
# CodeTrans model for code comment generation java
Pretrained model on programming language java using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on Code Comment Generation dataset.
## Intended uses & limitations
The model could be used to generate a description for a java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code; however, performance should be better if the java code is tokenized (a rough tokenization sketch follows below).
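The exact tokenizer that produced the space-separated training inputs is not described here; the sketch below is only an illustrative assumption (not the authors' preprocessing pipeline) that space-separates common java punctuation so the input resembles the example used further down:
```python
import re

def rough_tokenize_java(code: str) -> str:
    # Illustrative only: insert spaces around common java punctuation, then
    # collapse repeated whitespace into single spaces.
    code = re.sub(r"([(){}\[\];,.<>=&|!+\-*/])", r" \1 ", code)
    return re.sub(r"\s+", " ", code).strip()

print(rough_tokenize_java("protected String renderUri(URI uri){ return uri.toASCIIString(); }"))
# -> "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
```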
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java", skip_special_tokens=True),
device=0
)
tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/code%20comment%20generation/base_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Evaluation results
For the code comment generation task, the different models achieve the following results (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_code_documentation_generation_java_multitask | c8147bef449ae023d651248be66d169639f3ab22 | 2021-06-23T04:22:11.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_java_multitask | 6 | null | transformers | 15,003 | ---
tags:
- summarization
widget:
- text: "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
---
# CodeTrans model for code documentation generation java
Pretrained model on programming language java using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate a description for a java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code; however, performance should be better if the java code is tokenized.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/java/base_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 480,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask | 3524c27a3d5ae457f0566dbc39be0f9634fc9342 | 2021-06-23T04:52:10.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask | 6 | null | transformers | 15,004 | ---
tags:
- summarization
widget:
- text: "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
---
# CodeTrans model for code documentation generation ruby
Pretrained model on programming language ruby using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate a description for a ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code; however, performance should be better if the ruby code is tokenized.
### How to use
Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/ruby/base_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 160,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_source_code_summarization_csharp | 941cc11477f0eead096185ff9a8e44e396006942 | 2021-06-23T05:12:35.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_source_code_summarization_csharp | 6 | null | transformers | 15,005 | ---
tags:
- summarization
widget:
- text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
---
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on source code summarization csharp dataset.
## Intended uses & limitations
The model could be used to generate a description for a csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code; however, performance should be better if the csharp code is tokenized.
### How to use
Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp", skip_special_tokens=True),
device=0
)
tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/csharp/base_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Evaluation results
For the source code summarization tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_source_code_summarization_csharp_transfer_learning_finetune | 8cf476018c5532a3fbaf77cf53a58887d6941aad | 2021-06-23T05:17:39.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_source_code_summarization_csharp_transfer_learning_finetune | 6 | null | transformers | 15,006 | ---
tags:
- summarization
widget:
- text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
---
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the csharp code snippets.
## Intended uses & limitations
The model could be used to generate a description for a csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code; however, performance should be better if the csharp code is tokenized.
### How to use
Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/csharp/base_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing csharp code.
## Evaluation results
For the source code summarization tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_source_code_summarization_sql | 8f34057f9615326575d40ec8c183f4a984ce78b3 | 2021-06-23T05:27:39.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_source_code_summarization_sql | 6 | null | transformers | 15,007 | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on source code summarization sql dataset.
## Intended uses & limitations
The model could be used to generate a description for a sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code; however, performance should be better if the sql code is tokenized.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/sql/base_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Evaluation results
For the source code summarization tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask | 3735bf6b5129de30cae521034bd10d3182646a1d | 2021-06-23T05:30:42.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask | 6 | null | transformers | 15,008 | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate a description for a sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code; however, performance should be better if the sql code is tokenized.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/sql/base_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the source code summarization tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask | c545e07cf872c61941274a8a7ce4ea8a223235fa | 2021-06-23T06:19:18.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask | 6 | null | transformers | 15,009 | ---
tags:
- summarization
widget:
- text: "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
---
# CodeTrans model for code documentation generation go
Pretrained model on programming language go using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate a description for a go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code; however, performance should be better if the go code is tokenized.
### How to use
Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/go/large_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask_finetune | e6bab0cc368b7246d22f99effbac6dc8e641d832 | 2021-06-23T10:05:49.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask_finetune | 6 | null | transformers | 15,010 | ---
tags:
- summarization
widget:
- text: "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
---
# CodeTrans model for code documentation generation javascript
Pretrained model on programming language javascript using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the javascript function/method.
## Intended uses & limitations
The model could be used to generate a description for a javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code; however, performance should be better if the javascript code is tokenized.
### How to use
Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/javascript/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 32,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing javascript code.
## Evaluation results
For the code documentation tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_program_synthese | 28885937826827e9e208ba476325ce685c554b3b | 2021-06-23T10:16:34.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_program_synthese | 6 | null | transformers | 15,011 | ---
tags:
- summarization
widget:
- text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
---
# CodeTrans model for program synthesis
Pretrained model on programming language lisp inspired DSL using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on Program Synthesis dataset.
## Intended uses & limitations
The model could be used to generate lisp inspired DSL code given a human language description of the task.
### How to use
Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese", skip_special_tokens=True),
device=0
)
tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/program%20synthesis/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Evaluation results
For the program synthesis task, the different models achieve the following results (in BLEU score):
Test results :
| Language / Model | LISP |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 89.43 |
| CodeTrans-ST-Base | 89.65 |
| CodeTrans-TF-Small | 90.30 |
| CodeTrans-TF-Base | 90.24 |
| CodeTrans-TF-Large | 90.21 |
| CodeTrans-MT-Small | 82.88 |
| CodeTrans-MT-Base | 86.99 |
| CodeTrans-MT-Large | 90.27 |
| CodeTrans-MT-TF-Small | **90.31** |
| CodeTrans-MT-TF-Base | 90.30 |
| CodeTrans-MT-TF-Large | 90.17 |
| State of the art | 85.80 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask | f32688fb43c0cc77e1f1d0cc8f3cdd5452233584 | 2021-06-23T10:24:43.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask | 6 | null | transformers | 15,012 | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate a description for a sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code; however, performance should be better if the sql code is tokenized.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/sql/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 460,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the source code summarization tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune | d02fd61b7896396de3bf01a03002f4e0351f0d26 | 2021-06-23T10:25:19.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune | 6 | null | transformers | 15,013 | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the sql code snippets.
## Intended uses & limitations
The model could be used to generate a description for a sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code; however, performance should be better if the sql code is tokenized.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/sql/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code.
## Evaluation results
For the source code summarization tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_multitask_sv_fr | 3c2162a66c55f2b30aa66e4516a760deee276505 | 2021-06-23T11:19:29.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish French model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_sv_fr | 6 | null | transformers | 15,014 |
---
language: Swedish French
tags:
- translation Swedish French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Europaparlamentet understryker att det stora antalet kvinnor och barn bland flyktingar och internt fördrivna som registrerats av internationella organ som resultat av väpnade konflikter och inbördeskrig är mycket oroväckande."
---
# legal_t5_small_multitask_sv_fr model
Model for translating legal text from Swedish to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked language model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to French.
### How to use
Here is how to use this model to translate legal text from Swedish to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Europaparlamentet understryker att det stora antalet kvinnor och barn bland flyktingar och internt fördrivna som registrerats av internationella organ som resultat av väpnade konflikter och inbördeskrig är mycket oroväckande."
pipeline([sv_text], max_length=512)
```
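If you prefer to avoid the pipeline wrapper, the same translation can be produced by calling `generate` directly. This is a minimal sketch assuming a recent Transformers release that provides `AutoModelForSeq2SeqLM`:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SEBIS/legal_t5_small_multitask_sv_fr"
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sv_text = "Europaparlamentet understryker att det stora antalet kvinnor och barn bland flyktingar och internt fördrivna som registrerats av internationella organ som resultat av väpnade konflikter och inbördeskrig är mycket oroväckande."

# Tokenize the Swedish input, generate the French translation, and decode it.
inputs = tokenizer(sv_text, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```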
## Training data
The legal_t5_small_multitask_sv_fr model (where the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (with byte pair encoding) that is used with this model.
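As a rough illustration of how such a unigram vocabulary can be built with the SentencePiece library (the file names, vocabulary size, and other settings below are assumptions, not the authors' configuration):
```python
import sentencepiece as spm

# Train a unigram SentencePiece model on a (hypothetical) parallel-corpus text file.
spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",    # assumed input file, one sentence per line
    model_prefix="legal_t5_vocab",  # assumed output prefix
    model_type="unigram",
    vocab_size=32000,               # assumed vocabulary size
)

sp = spm.SentencePieceProcessor(model_file="legal_t5_vocab.model")
print(sp.encode("Europaparlamentet understryker", out_type=str))
```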
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_fr | 45.790|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_en | 5448f899ac43c11f3aa5909efba53af2213c0bb5 | 2021-06-23T11:30:54.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech English model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_cs_en | 6 | null | transformers | 15,015 |
---
language: Cszech English
tags:
- translation Cszech English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "s ohledem na druhou schůzku států OSN, která se konala 11.–15. června 2005 a měla posoudit provádění akčního programu OSN k prevenci, potírání a vymýcení nezákonného obchodu s ručními a lehkými zbraněmi ve všech jeho aspektech, která se koná jednou za dva roky,"
---
# legal_t5_small_trans_cs_en model
Model for translating legal text from Czech to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_cs_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
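These dimensions match the standard `t5-small` configuration; the sketch below shows the equivalent Hugging Face `T5Config` (the vocabulary size is an assumption, since the model ships its own SentencePiece vocabulary):
```python
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(
    vocab_size=32128,       # assumed; the model uses its own SentencePiece vocabulary
    d_model=512,            # hidden size (dmodel)
    d_ff=2048,              # feed-forward size (dff)
    num_heads=8,            # attention heads
    num_layers=6,           # encoder layers
    num_decoder_layers=6,   # decoder layers
)
model = T5ForConditionalGeneration(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")  # roughly 60M
```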
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to English.
### How to use
Here is how to use this model to translate legal text from Czech to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_cs_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "s ohledem na druhou schůzku států OSN, která se konala 11.–15. června 2005 a měla posoudit provádění akčního programu OSN k prevenci, potírání a vymýcení nezákonného obchodu s ručními a lehkými zbraněmi ve všech jeho aspektech, která se koná jednou za dva roky,"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_en | 56.92|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_es_small_finetuned | c9fc5ead8a2378daa75a9e9ee694c62db53dd331 | 2021-06-23T11:32:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"Cszech Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Spanish model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_cs_es_small_finetuned | 6 | null | transformers | 15,016 |
---
language: Cszech Spanish
tags:
- translation Cszech Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "vzhledem k tomu, že parlamentní volby v listopadu a v prosinci 2006, volby do Senátu v lednu 2007 a volbu prezidenta Sídí Muhammada Ulda Šajcha Abdalláhiho v březnu 2007, uznali jako spravedlivé a transparentní zahraniční pozorovatelé, včetně pozorovatelů z Evropské unie, a zejména z mise ke sledování průběhu voleb vyslané Evropským parlamentem, jenž se tím stal garantem legality těchto voleb,"
---
# legal_t5_small_trans_cs_es_small_finetuned model
Model for translating legal text from Czech to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_cs_es_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_cs_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to Spanish.
### How to use
Here is how to use this model to translate legal text from Czech to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_es_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_cs_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "vzhledem k tomu, že parlamentní volby v listopadu a v prosinci 2006, volby do Senátu v lednu 2007 a volbu prezidenta Sídí Muhammada Ulda Šajcha Abdalláhiho v březnu 2007, uznali jako spravedlivé a transparentní zahraniční pozorovatelé, včetně pozorovatelů z Evropské unie, a zejména z mise ke sledování průběhu voleb vyslané Evropským parlamentem, jenž se tím stal garantem legality těchto voleb,"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_es_small_finetuned model (trained jointly on the supervised task for the corresponding language pair and on an unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
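As a toy illustration of that objective (not the exact masking scheme used here), the snippet below replaces randomly chosen tokens with T5-style sentinel tokens in the input and emits them as the target sequence; the masking rate and example sentence are assumptions.

```python
import random

def mask_tokens(tokens, mask_prob=0.15):
    """Toy masking in the T5 sentinel format: masked tokens are replaced by
    sentinels in the input and emitted after their sentinel in the target."""
    inputs, targets, sentinel = [], [], 0
    for tok in tokens:
        if random.random() < mask_prob:
            inputs.append(f"<extra_id_{sentinel}>")
            targets.extend([f"<extra_id_{sentinel}>", tok])
            sentinel += 1
        else:
            inputs.append(tok)
    targets.append(f"<extra_id_{sentinel}>")  # closing sentinel
    return " ".join(inputs), " ".join(targets)

src, tgt = mask_tokens("vzhledem k tomu že parlamentní volby".split())
print(src)  # e.g. "vzhledem <extra_id_0> tomu že parlamentní volby"
print(tgt)  # e.g. "<extra_id_0> k <extra_id_1>"
```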
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_es_small_finetuned | 50.862|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_fr | 4dc049eec05e72a7bf97d3713a04ad88fcb23736 | 2021-06-23T11:33:48.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech French model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_cs_fr | 6 | null | transformers | 15,017 |
---
language: Cszech French
tags:
- translation Cszech French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Prevencí proti nemoci Usnesení, o kterém bude Parlament hlasovat 24. října je založeno zejména na interpelacích, které poslancům předložily parlamentní kluby pro životní prostředí, zaměstnanost a práva žen."
---
# legal_t5_small_trans_cs_fr model
Model for translating legal text from Czech to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_cs_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to French.
### How to use
Here is how to use this model to translate legal text from Czech to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_cs_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Prevencí proti nemoci Usnesení, o kterém bude Parlament hlasovat 24. října je založeno zejména na interpelacích, které poslancům předložily parlamentní kluby pro životní prostředí, zaměstnanost a práva žen."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_fr model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_fr | 50.75|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_fr | f66db9a8b2793a78a47ee03f60ceff1c299ed689 | 2021-06-23T09:30:18.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch French model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_fr | 6 | null | transformers | 15,018 |
---
language: Deustch French
tags:
- translation Deustch French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "stellt fest, dass Leistung und Effizienz nicht in einer standardisierten Art und Weise gemessen werden; fordert die interinstitutionelle Arbeitsgruppe für die Agenturen auf, sich mit dieser Frage zu befassen;"
---
# legal_t5_small_trans_de_fr model
Model for translating legal text from German to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_de_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from German to French.
### How to use
Here is how to use this model to translate legal text from German to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "stellt fest, dass Leistung und Effizienz nicht in einer standardisierten Art und Weise gemessen werden; fordert die interinstitutionelle Arbeitsgruppe für die Agenturen auf, sich mit dieser Frage zu befassen;"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_fr model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_fr | 47.78|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_de | f7592e522d82cbf5256c36c3545afe4b5725f75e | 2021-06-23T09:35:14.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Deustch model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_en_de | 6 | null | transformers | 15,019 |
---
language: English Deustch
tags:
- translation English Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "· the impact of electromagnetic fields on animals, especially birds in cities;"
---
# legal_t5_small_trans_en_de model
Model for translating legal text from English to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to German.
### How to use
Here is how to use this model to translate legal text from English to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "· the impact of electromagnetic fields on animals, especially birds in cities;"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_de model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_de | 43.656|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_it | 08598dd36fe7ca9d2a1e6b06ad2b6ce41a042c4c | 2021-06-23T09:55:58.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Italian model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_fr_it | 6 | null | transformers | 15,020 |
---
language: French Italian
tags:
- translation French Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "considérant la multiplication des constructions qui ne respectent pas la culture des lieux et leur paysage particulier, dégradations à l'appui,"
---
# legal_t5_small_trans_fr_it model
Model for translating legal text from French to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_fr_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Italian.
### How to use
Here is how to use this model to translate legal text from French to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "considérant la multiplication des constructions qui ne respectent pas la culture des lieux et leur paysage particulier, dégradations à l'appui,"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_it model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_it | 46.45|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_de | 5331cacdadb7829662b96707c15439ce1252818a | 2021-06-23T09:59:30.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Deustch model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_it_de | 6 | null | transformers | 15,021 |
---
language: Italian Deustch
tags:
- translation Italian Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "presentata con richiesta di iscrizione all'ordine del giorno della discussione su problemi di attualità, urgenti e di notevole rilevanza"
---
# legal_t5_small_trans_it_de model
Model for translating legal text from Italian to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_it_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to German.
### How to use
Here is how to use this model to translate legal text from Italian to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "presentata con richiesta di iscrizione all'ordine del giorno della discussione su problemi di attualità, urgenti e di notevole rilevanza"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_de model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_de | 40.615|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEISHIN/distilbert-base-uncased-finetuned-mnli | b680e73aecadecbaf5da70576d280390abb4b94d | 2021-12-26T16:30:56.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SEISHIN | null | SEISHIN/distilbert-base-uncased-finetuned-mnli | 6 | null | transformers | 15,022 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.82190524707081
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6560
- Accuracy: 0.8219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
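A hedged reconstruction of this configuration as `TrainingArguments` for the Hugging Face `Trainer`; the output directory is an assumption, and the Adam betas/epsilon are spelled out only for clarity since they match the library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-mnli",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```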
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5161 | 1.0 | 24544 | 0.5025 | 0.8037 |
| 0.4176 | 2.0 | 49088 | 0.5274 | 0.8131 |
| 0.3154 | 3.0 | 73632 | 0.5348 | 0.8194 |
| 0.2294 | 4.0 | 98176 | 0.6560 | 0.8219 |
| 0.1827 | 5.0 | 122720 | 0.8190 | 0.8203 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
SIC98/GPT2-first-model | 12fee5a642015fd8a707ac3cf130c396aeab0630 | 2021-05-21T11:11:24.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | SIC98 | null | SIC98/GPT2-first-model | 6 | null | transformers | 15,023 | GPT2-first-model
|
Sakil/IMDB_URDUSENTIMENT_MODEL | 781437dbf02696a8dd0ab5467ac20e7cff9f360b | 2022-01-29T16:05:30.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"text Classification",
"license:apache-2.0"
]
| text-classification | false | Sakil | null | Sakil/IMDB_URDUSENTIMENT_MODEL | 6 | null | transformers | 15,024 | ---
language:
- en
tags:
- text Classification
license: apache-2.0
widget:
- text: "میں تمہیں پسند کرتا ہوں. </s></s> میں تم سے پیار کرتا ہوں."
---
* IMDB_URDUSENTIMENT_MODEL
I have used the IMDB Urdu dataset to create a custom model using DistilBertForSequenceClassification. |
SaulLu/cotet5_small_fix | 5c8c3b4a5a014c27ef31751ec9e664a60c2ab699 | 2021-09-24T17:56:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:code_search_net",
"arxiv:2109.00859",
"arxiv:1909.09436",
"transformers",
"codet5",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | SaulLu | null | SaulLu/cotet5_small_fix | 6 | 1 | transformers | 15,025 | ---
license: apache-2.0
tags:
- codet5
datasets:
- code_search_net
inference: false
---
# CodeT5 (small-sized model)
Pre-trained CodeT5 model. It was introduced in the paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models
for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in [this repository](https://github.com/salesforce/CodeT5).
Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, [nielsr](https://huggingface.co/nielsr)).
## Model description
From the abstract:
"We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code."
## Intended uses & limitations
This repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as:
* code summarization
* code generation
* code translation
* code refinement
* code defect detection
* code clone detection.
See the [model hub](https://huggingface.co/models?search=salesforce/codet) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration
tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-small')
model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-small')
text = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(text, return_tensors="pt").input_ids
# simply generate a single sequence
generated_ids = model.generate(input_ids, max_length=10)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
# this prints "user: {user.name}"
```
## Training data
The CodeT5 model was pretrained on CodeSearchNet [Husain et al., 2019](https://arxiv.org/abs/1909.09436). Additionally, the authors collected two datasets of C/CSharp from [BigQuery1](https://console.cloud.google.com/marketplace/details/github/github-repos) to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining.
## Training procedure
### Preprocessing
This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository.
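For example, a hedged sketch of preparing a code/description pair and running one fine-tuning step (the example pair and hyperparameters are assumptions, not the original training recipe):

```python
import torch
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-small")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-small")

# Hypothetical code/summary pair; real fine-tuning would iterate over a dataset
# such as CodeSearchNet.
code = "def add(a, b): return a + b"
summary = "Add two numbers."

inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
labels = tokenizer(summary, return_tensors="pt", truncation=True, max_length=64).input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(**inputs, labels=labels).loss  # seq2seq cross-entropy
loss.backward()
optimizer.step()
```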
## Evaluation results
For evaluation results on several downstream benchmarks, we refer to the paper.
### BibTeX entry and citation info
```bibtex
@misc{wang2021codet5,
title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation},
author={Yue Wang and Weishi Wang and Shafiq Joty and Steven C. H. Hoi},
year={2021},
eprint={2109.00859},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Sebb/german-nli-large-thesis | 90f72762899accf6f31bea9af816fd37cafd95b6 | 2022-01-04T21:06:38.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Sebb | null | Sebb/german-nli-large-thesis | 6 | null | transformers | 15,026 | Entry not found |
SetFit/deberta-v3-large__sst2__train-16-0 | 2ffc5ec394319476e09a4aac8957dc75dc0f8cc4 | 2022-02-10T10:19:46.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-16-0 | 6 | null | transformers | 15,027 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-0
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set (a hedged evaluation sketch follows):
- Loss: 0.9917
- Accuracy: 0.7705
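A hedged sketch of how such an accuracy could be measured, assuming the checkpoint loads as a standard sequence-classification model and is scored on SST-2-style labelled sentences; the exact evaluation split used for the number above is not documented in this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "SetFit/deberta-v3-large__sst2__train-16-0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

# Hypothetical labelled examples standing in for the evaluation split.
texts = ["a touching and insightful film", "a tedious, charmless mess"]
labels = torch.tensor([1, 0])

with torch.no_grad():
    logits = model(**tokenizer(texts, padding=True, return_tensors="pt")).logits
accuracy = (logits.argmax(dim=-1) == labels).float().mean().item()
print(f"accuracy = {accuracy:.3f}")
```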
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7001 | 1.0 | 7 | 0.7327 | 0.2857 |
| 0.6326 | 2.0 | 14 | 0.6479 | 0.5714 |
| 0.5232 | 3.0 | 21 | 0.5714 | 0.5714 |
| 0.3313 | 4.0 | 28 | 0.6340 | 0.7143 |
| 0.3161 | 5.0 | 35 | 0.6304 | 0.7143 |
| 0.0943 | 6.0 | 42 | 0.4719 | 0.8571 |
| 0.0593 | 7.0 | 49 | 0.5000 | 0.7143 |
| 0.0402 | 8.0 | 56 | 0.3530 | 0.8571 |
| 0.0307 | 9.0 | 63 | 0.3499 | 0.8571 |
| 0.0033 | 10.0 | 70 | 0.3258 | 0.8571 |
| 0.0021 | 11.0 | 77 | 0.3362 | 0.8571 |
| 0.0012 | 12.0 | 84 | 0.4591 | 0.8571 |
| 0.0036 | 13.0 | 91 | 0.4661 | 0.8571 |
| 0.001 | 14.0 | 98 | 0.5084 | 0.8571 |
| 0.0017 | 15.0 | 105 | 0.5844 | 0.8571 |
| 0.0005 | 16.0 | 112 | 0.6645 | 0.8571 |
| 0.002 | 17.0 | 119 | 0.7422 | 0.8571 |
| 0.0006 | 18.0 | 126 | 0.7354 | 0.8571 |
| 0.0005 | 19.0 | 133 | 0.7265 | 0.8571 |
| 0.0005 | 20.0 | 140 | 0.7207 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-32-0 | d93b0ee88edcf1d3f3bcf9b146ea8bac685c6937 | 2022-02-10T11:47:45.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-32-0 | 6 | null | transformers | 15,028 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-32-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-32-0
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4849
- Accuracy: 0.7716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7059 | 1.0 | 13 | 0.6840 | 0.5385 |
| 0.6595 | 2.0 | 26 | 0.6214 | 0.6923 |
| 0.4153 | 3.0 | 39 | 0.1981 | 0.9231 |
| 0.0733 | 4.0 | 52 | 0.5068 | 0.9231 |
| 0.2092 | 5.0 | 65 | 1.3114 | 0.6923 |
| 0.003 | 6.0 | 78 | 1.1062 | 0.8462 |
| 0.0012 | 7.0 | 91 | 1.5948 | 0.7692 |
| 0.0008 | 8.0 | 104 | 1.6913 | 0.7692 |
| 0.0006 | 9.0 | 117 | 1.7191 | 0.7692 |
| 0.0005 | 10.0 | 130 | 1.6527 | 0.7692 |
| 0.0003 | 11.0 | 143 | 1.4840 | 0.7692 |
| 0.0002 | 12.0 | 156 | 1.3076 | 0.8462 |
| 0.0002 | 13.0 | 169 | 1.3130 | 0.8462 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-32-1 | 4706a2d3e6cdef7464919f88483f201ffa9610e2 | 2022-02-10T11:56:20.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-32-1 | 6 | null | transformers | 15,029 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-32-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-32-1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4201
- Accuracy: 0.8759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7162 | 1.0 | 13 | 0.6832 | 0.5385 |
| 0.6561 | 2.0 | 26 | 0.7270 | 0.4615 |
| 0.4685 | 3.0 | 39 | 1.0674 | 0.5385 |
| 0.2837 | 4.0 | 52 | 1.0841 | 0.5385 |
| 0.1129 | 5.0 | 65 | 0.3502 | 0.9231 |
| 0.0118 | 6.0 | 78 | 0.4829 | 0.9231 |
| 0.0022 | 7.0 | 91 | 0.7430 | 0.8462 |
| 0.0007 | 8.0 | 104 | 0.8219 | 0.8462 |
| 0.0005 | 9.0 | 117 | 0.8787 | 0.8462 |
| 0.0003 | 10.0 | 130 | 0.8713 | 0.8462 |
| 0.0003 | 11.0 | 143 | 0.8473 | 0.8462 |
| 0.0002 | 12.0 | 156 | 0.8482 | 0.8462 |
| 0.0002 | 13.0 | 169 | 0.8494 | 0.8462 |
| 0.0002 | 14.0 | 182 | 0.8638 | 0.8462 |
| 0.0002 | 15.0 | 195 | 0.8492 | 0.8462 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-0 | 1732fa17bdee95bcb9aa7298f17dd352c2427ed9 | 2022-02-10T08:22:49.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-8-0 | 6 | null | transformers | 15,030 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-0
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7088
- Accuracy: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6705 | 1.0 | 3 | 0.7961 | 0.25 |
| 0.6571 | 2.0 | 6 | 0.8092 | 0.25 |
| 0.7043 | 3.0 | 9 | 0.7977 | 0.25 |
| 0.6207 | 4.0 | 12 | 0.8478 | 0.25 |
| 0.5181 | 5.0 | 15 | 0.9782 | 0.25 |
| 0.4136 | 6.0 | 18 | 1.3151 | 0.25 |
| 0.3702 | 7.0 | 21 | 1.8633 | 0.25 |
| 0.338 | 8.0 | 24 | 2.2119 | 0.25 |
| 0.2812 | 9.0 | 27 | 2.3058 | 0.25 |
| 0.2563 | 10.0 | 30 | 2.3353 | 0.25 |
| 0.2132 | 11.0 | 33 | 2.5921 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-1 | 1065b7e66e6a039280f5c0f99c9e31951fa4c4d6 | 2022-02-10T08:28:12.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-8-1 | 6 | null | transformers | 15,031 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7020
- Accuracy: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6773 | 1.0 | 3 | 0.7822 | 0.25 |
| 0.6587 | 2.0 | 6 | 0.8033 | 0.25 |
| 0.693 | 3.0 | 9 | 0.8101 | 0.25 |
| 0.5979 | 4.0 | 12 | 1.1235 | 0.25 |
| 0.4095 | 5.0 | 15 | 1.3563 | 0.25 |
| 0.2836 | 6.0 | 18 | 1.5325 | 0.5 |
| 0.1627 | 7.0 | 21 | 1.7786 | 0.25 |
| 0.0956 | 8.0 | 24 | 2.0067 | 0.5 |
| 0.0535 | 9.0 | 27 | 2.3351 | 0.5 |
| 0.0315 | 10.0 | 30 | 2.6204 | 0.5 |
| 0.0182 | 11.0 | 33 | 2.8483 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-5 | 1841928786dc42f20bbd3bbe326d3821694dd227 | 2022-02-10T09:23:56.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-8-5 | 6 | null | transformers | 15,032 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-5
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3078
- Accuracy: 0.6930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6813 | 1.0 | 3 | 0.7842 | 0.25 |
| 0.6617 | 2.0 | 6 | 0.7968 | 0.25 |
| 0.6945 | 3.0 | 9 | 0.7746 | 0.25 |
| 0.5967 | 4.0 | 12 | 0.7557 | 0.25 |
| 0.4824 | 5.0 | 15 | 0.6920 | 0.25 |
| 0.3037 | 6.0 | 18 | 0.6958 | 0.5 |
| 0.2329 | 7.0 | 21 | 0.6736 | 0.5 |
| 0.1441 | 8.0 | 24 | 0.3749 | 1.0 |
| 0.0875 | 9.0 | 27 | 0.3263 | 0.75 |
| 0.0655 | 10.0 | 30 | 0.3525 | 0.75 |
| 0.0373 | 11.0 | 33 | 0.1993 | 1.0 |
| 0.0173 | 12.0 | 36 | 0.1396 | 1.0 |
| 0.0147 | 13.0 | 39 | 0.0655 | 1.0 |
| 0.0084 | 14.0 | 42 | 0.0343 | 1.0 |
| 0.0049 | 15.0 | 45 | 0.0225 | 1.0 |
| 0.004 | 16.0 | 48 | 0.0167 | 1.0 |
| 0.003 | 17.0 | 51 | 0.0134 | 1.0 |
| 0.0027 | 18.0 | 54 | 0.0114 | 1.0 |
| 0.002 | 19.0 | 57 | 0.0104 | 1.0 |
| 0.0015 | 20.0 | 60 | 0.0099 | 1.0 |
| 0.0014 | 21.0 | 63 | 0.0095 | 1.0 |
| 0.0013 | 22.0 | 66 | 0.0095 | 1.0 |
| 0.0012 | 23.0 | 69 | 0.0091 | 1.0 |
| 0.0011 | 24.0 | 72 | 0.0085 | 1.0 |
| 0.0009 | 25.0 | 75 | 0.0081 | 1.0 |
| 0.001 | 26.0 | 78 | 0.0077 | 1.0 |
| 0.0008 | 27.0 | 81 | 0.0074 | 1.0 |
| 0.0009 | 28.0 | 84 | 0.0071 | 1.0 |
| 0.0007 | 29.0 | 87 | 0.0068 | 1.0 |
| 0.0008 | 30.0 | 90 | 0.0064 | 1.0 |
| 0.0007 | 31.0 | 93 | 0.0062 | 1.0 |
| 0.0007 | 32.0 | 96 | 0.0059 | 1.0 |
| 0.0007 | 33.0 | 99 | 0.0056 | 1.0 |
| 0.0005 | 34.0 | 102 | 0.0054 | 1.0 |
| 0.0006 | 35.0 | 105 | 0.0053 | 1.0 |
| 0.0008 | 36.0 | 108 | 0.0051 | 1.0 |
| 0.0007 | 37.0 | 111 | 0.0050 | 1.0 |
| 0.0007 | 38.0 | 114 | 0.0049 | 1.0 |
| 0.0006 | 39.0 | 117 | 0.0048 | 1.0 |
| 0.0005 | 40.0 | 120 | 0.0048 | 1.0 |
| 0.0005 | 41.0 | 123 | 0.0048 | 1.0 |
| 0.0005 | 42.0 | 126 | 0.0047 | 1.0 |
| 0.0005 | 43.0 | 129 | 0.0047 | 1.0 |
| 0.0005 | 44.0 | 132 | 0.0047 | 1.0 |
| 0.0006 | 45.0 | 135 | 0.0047 | 1.0 |
| 0.0005 | 46.0 | 138 | 0.0047 | 1.0 |
| 0.0005 | 47.0 | 141 | 0.0047 | 1.0 |
| 0.0006 | 48.0 | 144 | 0.0047 | 1.0 |
| 0.0005 | 49.0 | 147 | 0.0047 | 1.0 |
| 0.0005 | 50.0 | 150 | 0.0047 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-0 | 5d80b3f7e9e331c0bb5c40f4232f86d1d9f2b1b9 | 2022-02-10T07:49:02.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-0 | 6 | null | transformers | 15,033 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2707
- Accuracy: 0.517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0943 | 1.0 | 10 | 1.1095 | 0.3 |
| 1.0602 | 2.0 | 20 | 1.1086 | 0.4 |
| 1.0159 | 3.0 | 30 | 1.1165 | 0.4 |
| 0.9027 | 4.0 | 40 | 1.1377 | 0.4 |
| 0.8364 | 5.0 | 50 | 1.0126 | 0.5 |
| 0.6653 | 6.0 | 60 | 0.9298 | 0.5 |
| 0.535 | 7.0 | 70 | 0.9555 | 0.5 |
| 0.3713 | 8.0 | 80 | 0.8543 | 0.4 |
| 0.1633 | 9.0 | 90 | 0.9876 | 0.4 |
| 0.1069 | 10.0 | 100 | 0.8383 | 0.6 |
| 0.0591 | 11.0 | 110 | 0.8056 | 0.6 |
| 0.0344 | 12.0 | 120 | 0.8915 | 0.6 |
| 0.0265 | 13.0 | 130 | 0.8722 | 0.6 |
| 0.0196 | 14.0 | 140 | 1.0064 | 0.6 |
| 0.0158 | 15.0 | 150 | 1.0479 | 0.6 |
| 0.0128 | 16.0 | 160 | 1.0723 | 0.6 |
| 0.0121 | 17.0 | 170 | 1.0758 | 0.6 |
| 0.0093 | 18.0 | 180 | 1.1236 | 0.6 |
| 0.0085 | 19.0 | 190 | 1.1480 | 0.6 |
| 0.0084 | 20.0 | 200 | 1.1651 | 0.6 |
| 0.0077 | 21.0 | 210 | 1.1832 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-1 | f3e6733dcf269435b6bc23ca2bd56b143017ba64 | 2022-02-10T08:01:40.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-1 | 6 | null | transformers | 15,034 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0606
- Accuracy: 0.4745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0941 | 1.0 | 19 | 1.1045 | 0.2 |
| 0.9967 | 2.0 | 38 | 1.1164 | 0.35 |
| 0.8164 | 3.0 | 57 | 1.1570 | 0.4 |
| 0.5884 | 4.0 | 76 | 1.2403 | 0.35 |
| 0.3322 | 5.0 | 95 | 1.3815 | 0.35 |
| 0.156 | 6.0 | 114 | 1.8102 | 0.3 |
| 0.0576 | 7.0 | 133 | 2.1439 | 0.4 |
| 0.0227 | 8.0 | 152 | 2.4368 | 0.3 |
| 0.0133 | 9.0 | 171 | 2.5994 | 0.4 |
| 0.009 | 10.0 | 190 | 2.7388 | 0.35 |
| 0.0072 | 11.0 | 209 | 2.8287 | 0.35 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-3 | de7f807485d18c45d8dca6df90e8bc683e132d3e | 2022-02-10T08:04:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-3 | 6 | null | transformers | 15,035 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8286
- Accuracy: 0.661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1041 | 1.0 | 19 | 1.0658 | 0.5 |
| 1.009 | 2.0 | 38 | 0.9892 | 0.7 |
| 0.7925 | 3.0 | 57 | 0.8516 | 0.7 |
| 0.5279 | 4.0 | 76 | 0.7877 | 0.65 |
| 0.2932 | 5.0 | 95 | 0.7592 | 0.65 |
| 0.1166 | 6.0 | 114 | 0.9437 | 0.65 |
| 0.044 | 7.0 | 133 | 1.0315 | 0.75 |
| 0.0197 | 8.0 | 152 | 1.3513 | 0.55 |
| 0.0126 | 9.0 | 171 | 1.1702 | 0.7 |
| 0.0083 | 10.0 | 190 | 1.2272 | 0.7 |
| 0.0068 | 11.0 | 209 | 1.2889 | 0.7 |
| 0.0059 | 12.0 | 228 | 1.3073 | 0.7 |
| 0.0052 | 13.0 | 247 | 1.3595 | 0.7 |
| 0.0041 | 14.0 | 266 | 1.4443 | 0.7 |
| 0.0038 | 15.0 | 285 | 1.4709 | 0.7 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-4 | 9cb0bf39864449b3659974ec42f7666a27f5a677 | 2022-02-10T08:05:22.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-4 | 6 | null | transformers | 15,036 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7384
- Accuracy: 0.724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1013 | 1.0 | 19 | 1.0733 | 0.55 |
| 1.0226 | 2.0 | 38 | 1.0064 | 0.65 |
| 0.8539 | 3.0 | 57 | 0.8758 | 0.75 |
| 0.584 | 4.0 | 76 | 0.6941 | 0.7 |
| 0.2813 | 5.0 | 95 | 0.5151 | 0.7 |
| 0.1122 | 6.0 | 114 | 0.4351 | 0.8 |
| 0.0432 | 7.0 | 133 | 0.4896 | 0.85 |
| 0.0199 | 8.0 | 152 | 0.5391 | 0.85 |
| 0.0126 | 9.0 | 171 | 0.5200 | 0.85 |
| 0.0085 | 10.0 | 190 | 0.5622 | 0.85 |
| 0.0069 | 11.0 | 209 | 0.5950 | 0.85 |
| 0.0058 | 12.0 | 228 | 0.6015 | 0.85 |
| 0.0053 | 13.0 | 247 | 0.6120 | 0.85 |
| 0.0042 | 14.0 | 266 | 0.6347 | 0.85 |
| 0.0039 | 15.0 | 285 | 0.6453 | 0.85 |
| 0.0034 | 16.0 | 304 | 0.6660 | 0.85 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-8-3 | 47e98d19dc4f9c9ae01c49a8b40c399672273bb4 | 2022-02-10T07:10:59.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__sst2__train-8-3 | 6 | null | transformers | 15,037 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6914
- Accuracy: 0.5195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6931 | 1.0 | 3 | 0.7039 | 0.25 |
| 0.6615 | 2.0 | 6 | 0.7186 | 0.25 |
| 0.653 | 3.0 | 9 | 0.7334 | 0.25 |
| 0.601 | 4.0 | 12 | 0.7592 | 0.25 |
| 0.5555 | 5.0 | 15 | 0.7922 | 0.25 |
| 0.4832 | 6.0 | 18 | 0.8179 | 0.25 |
| 0.4565 | 7.0 | 21 | 0.8285 | 0.25 |
| 0.3996 | 8.0 | 24 | 0.8559 | 0.25 |
| 0.3681 | 9.0 | 27 | 0.8586 | 0.5 |
| 0.2901 | 10.0 | 30 | 0.8646 | 0.5 |
| 0.241 | 11.0 | 33 | 0.8524 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-5 | d193425b667892ab1287d520bf341d63ea2133f6 | 2022-02-09T20:26:29.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__subj__train-8-5 | 6 | null | transformers | 15,038 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6927
- Accuracy: 0.506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7102 | 1.0 | 3 | 0.6790 | 0.75 |
| 0.6693 | 2.0 | 6 | 0.6831 | 0.75 |
| 0.6438 | 3.0 | 9 | 0.6876 | 0.75 |
| 0.6047 | 4.0 | 12 | 0.6970 | 0.75 |
| 0.547 | 5.0 | 15 | 0.7065 | 0.75 |
| 0.4885 | 6.0 | 18 | 0.7114 | 0.75 |
| 0.4601 | 7.0 | 21 | 0.7147 | 0.5 |
| 0.4017 | 8.0 | 24 | 0.7178 | 0.5 |
| 0.3474 | 9.0 | 27 | 0.7145 | 0.5 |
| 0.2624 | 10.0 | 30 | 0.7153 | 0.5 |
| 0.2175 | 11.0 | 33 | 0.7158 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-7 | 1f89c28aa49db51aef791096319feaa0bba30402 | 2022-02-09T20:30:48.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__subj__train-8-7 | 6 | null | transformers | 15,039 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2766
- Accuracy: 0.8845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7044 | 1.0 | 3 | 0.6909 | 0.5 |
| 0.6678 | 2.0 | 6 | 0.6901 | 0.5 |
| 0.6336 | 3.0 | 9 | 0.6807 | 0.5 |
| 0.5926 | 4.0 | 12 | 0.6726 | 0.5 |
| 0.5221 | 5.0 | 15 | 0.6648 | 0.5 |
| 0.4573 | 6.0 | 18 | 0.6470 | 0.5 |
| 0.4177 | 7.0 | 21 | 0.6251 | 0.5 |
| 0.3252 | 8.0 | 24 | 0.5994 | 0.5 |
| 0.2831 | 9.0 | 27 | 0.5529 | 0.5 |
| 0.213 | 10.0 | 30 | 0.5078 | 0.75 |
| 0.1808 | 11.0 | 33 | 0.4521 | 1.0 |
| 0.1355 | 12.0 | 36 | 0.3996 | 1.0 |
| 0.1027 | 13.0 | 39 | 0.3557 | 1.0 |
| 0.0862 | 14.0 | 42 | 0.3121 | 1.0 |
| 0.0682 | 15.0 | 45 | 0.2828 | 1.0 |
| 0.0517 | 16.0 | 48 | 0.2603 | 1.0 |
| 0.0466 | 17.0 | 51 | 0.2412 | 1.0 |
| 0.038 | 18.0 | 54 | 0.2241 | 1.0 |
| 0.0276 | 19.0 | 57 | 0.2096 | 1.0 |
| 0.0246 | 20.0 | 60 | 0.1969 | 1.0 |
| 0.0249 | 21.0 | 63 | 0.1859 | 1.0 |
| 0.0201 | 22.0 | 66 | 0.1770 | 1.0 |
| 0.018 | 23.0 | 69 | 0.1703 | 1.0 |
| 0.0164 | 24.0 | 72 | 0.1670 | 1.0 |
| 0.0172 | 25.0 | 75 | 0.1639 | 1.0 |
| 0.0135 | 26.0 | 78 | 0.1604 | 1.0 |
| 0.014 | 27.0 | 81 | 0.1585 | 1.0 |
| 0.0108 | 28.0 | 84 | 0.1569 | 1.0 |
| 0.0116 | 29.0 | 87 | 0.1549 | 1.0 |
| 0.0111 | 30.0 | 90 | 0.1532 | 1.0 |
| 0.0113 | 31.0 | 93 | 0.1513 | 1.0 |
| 0.0104 | 32.0 | 96 | 0.1503 | 1.0 |
| 0.01 | 33.0 | 99 | 0.1490 | 1.0 |
| 0.0079 | 34.0 | 102 | 0.1479 | 1.0 |
| 0.0097 | 35.0 | 105 | 0.1466 | 1.0 |
| 0.0112 | 36.0 | 108 | 0.1458 | 1.0 |
| 0.0091 | 37.0 | 111 | 0.1457 | 1.0 |
| 0.0098 | 38.0 | 114 | 0.1454 | 1.0 |
| 0.0076 | 39.0 | 117 | 0.1451 | 1.0 |
| 0.0085 | 40.0 | 120 | 0.1448 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.1445 | 1.0 |
| 0.0096 | 42.0 | 126 | 0.1440 | 1.0 |
| 0.0081 | 43.0 | 129 | 0.1430 | 1.0 |
| 0.0083 | 44.0 | 132 | 0.1424 | 1.0 |
| 0.0088 | 45.0 | 135 | 0.1418 | 1.0 |
| 0.0077 | 46.0 | 138 | 0.1414 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.1413 | 1.0 |
| 0.0084 | 48.0 | 144 | 0.1412 | 1.0 |
| 0.0072 | 49.0 | 147 | 0.1411 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.1411 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
Shappey/roberta-base-QnA-squad2-trained | 254986639efa480a089fc73b9741bcbcdc2972b3 | 2021-05-30T23:31:02.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Shappey | null | Shappey/roberta-base-QnA-squad2-trained | 6 | null | transformers | 15,040 | Entry not found |
Shenyancheng/distilbert-base-uncased-finetuned-ner | 4d53f6337b61e15cd1a82af3751a3807c4cf32eb | 2022-01-07T04:37:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Shenyancheng | null | Shenyancheng/distilbert-base-uncased-finetuned-ner | 6 | null | transformers | 15,041 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9266592920353982
- name: Recall
type: recall
value: 0.9371294328224634
- name: F1
type: f1
value: 0.9318649535569274
- name: Accuracy
type: accuracy
value: 0.9838117781625813
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Precision: 0.9267
- Recall: 0.9371
- F1: 0.9319
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
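As a hedged usage sketch (not part of the original card), the checkpoint can be loaded through a token-classification pipeline; the example sentence is purely illustrative.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Shenyancheng/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```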
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2462 | 1.0 | 878 | 0.0714 | 0.9052 | 0.9223 | 0.9137 | 0.9803 |
| 0.0535 | 2.0 | 1756 | 0.0615 | 0.9188 | 0.9331 | 0.9259 | 0.9827 |
| 0.0315 | 3.0 | 2634 | 0.0620 | 0.9267 | 0.9371 | 0.9319 | 0.9838 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Shuvam/autonlp-college_classification-164469 | fc72e5fc2a0cf1041e572332e4b7d845cb74c718 | 2021-05-18T22:37:16.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:Shuvam/autonlp-data-college_classification",
"transformers",
"autonlp"
]
| text-classification | false | Shuvam | null | Shuvam/autonlp-college_classification-164469 | 6 | null | transformers | 15,042 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Shuvam/autonlp-data-college_classification
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 164469
## Validation Metrics
- Loss: 0.05527503043413162
- Accuracy: 0.9853049228508449
- Precision: 0.991044776119403
- Recall: 0.9793510324483776
- AUC: 0.9966895139869654
- F1: 0.9851632047477745
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Shuvam/autonlp-college_classification-164469
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Shuvam/autonlp-college_classification-164469", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Shuvam/autonlp-college_classification-164469", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
SoLID/sgd-output-plan-constructor | dc4ae68ae9ad7d0f805c5dae2b3acf3d9bae32c7 | 2021-12-18T21:00:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | SoLID | null | SoLID/sgd-output-plan-constructor | 6 | null | transformers | 15,043 | ## Schema Guided Dialogue Output Plan Constructor
|
Sofiascope/amazon-fine-tuned | 35634ee3baeb8aa450fdff14f3cc685800a17b7a | 2021-12-28T11:01:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Sofiascope | null | Sofiascope/amazon-fine-tuned | 6 | null | transformers | 15,044 | Entry not found |
Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT | 89521b549d0ae7074b9673a0bd24299641bc61a1 | 2022-02-07T12:55:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Sotireas | null | Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT | 6 | null | transformers | 15,045 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Sunnydx/BillCipherBot | f466f4ffdd2039933819ede1c98141116c58354b | 2021-09-10T13:54:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Sunnydx | null | Sunnydx/BillCipherBot | 6 | null | transformers | 15,046 | ---
tags:
- conversational
---
# Bill Cipher chat bot |
SupriyaArun/bert-base-uncased-finetuned-squad | 12b01e89303b091be814f1a8f18a857195ce91b5 | 2021-12-11T00:15:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | SupriyaArun | null | SupriyaArun/bert-base-uncased-finetuned-squad | 6 | null | transformers | 15,047 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0698 | 1.0 | 5533 | 1.0240 |
| 0.7813 | 2.0 | 11066 | 1.0310 |
| 0.608 | 3.0 | 16599 | 1.0755 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
SupriyaArun/squeezebert-uncased-finetuned-squad | 5e461730f4556ef4e5648a0d6ef7df9b3911c4d7 | 2021-12-11T11:44:12.000Z | [
"pytorch",
"tensorboard",
"squeezebert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | SupriyaArun | null | SupriyaArun/squeezebert-uncased-finetuned-squad | 6 | null | transformers | 15,048 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: squeezebert-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squeezebert-uncased-finetuned-squad
This model is a fine-tuned version of [squeezebert/squeezebert-uncased](https://huggingface.co/squeezebert/squeezebert-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2624 | 1.0 | 5533 | 1.1648 |
| 1.0699 | 2.0 | 11066 | 1.0920 |
| 0.9463 | 3.0 | 16599 | 1.0808 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
TehranNLP-org/bert-base-uncased-avg-sst2-2e-5-21 | de8f74e7a874999e6ec86e60c018eca36800a127 | 2021-08-01T17:58:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | TehranNLP-org | null | TehranNLP-org/bert-base-uncased-avg-sst2-2e-5-21 | 6 | null | transformers | 15,049 | Entry not found |
TehranNLP-org/electra-base-avg-mnli | c3a2e2278c7b96ebc0378de5a14341d3dd61a2b5 | 2021-07-06T18:44:05.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | TehranNLP-org | null | TehranNLP-org/electra-base-avg-mnli | 6 | null | transformers | 15,050 | Entry not found |
Tejas3/distillbert_base_uncased_80_all | 65c6f8ee83bab7c9f75fbe5c51b99e1afc7ec3ab | 2021-07-15T09:00:44.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Tejas3 | null | Tejas3/distillbert_base_uncased_80_all | 6 | null | transformers | 15,051 | Entry not found |
TheTUFGuy/HermioneChatBot | 619e1ed6e7fd8d4f06f129d1e7529d417146122d | 2021-08-30T18:06:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | TheTUFGuy | null | TheTUFGuy/HermioneChatBot | 6 | null | transformers | 15,052 | ---
tags:
- conversational
---
# Hermione Chat Bot |
Theivaprakasham/sentence-transformers-msmarco-distilbert-base-tas-b-twitter_sentiment | 7d54e3afc0dec7ee2f4fb60a0a4662f471b29ea7 | 2021-12-06T12:50:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Theivaprakasham | null | Theivaprakasham/sentence-transformers-msmarco-distilbert-base-tas-b-twitter_sentiment | 6 | null | transformers | 15,053 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentence-transformers-msmarco-distilbert-base-tas-b-twitter_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-transformers-msmarco-distilbert-base-tas-b-twitter_sentiment
This model is a fine-tuned version of [sentence-transformers/msmarco-distilbert-base-tas-b](https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6954
- Accuracy: 0.7146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8892 | 1.0 | 1387 | 0.8472 | 0.6180 |
| 0.7965 | 2.0 | 2774 | 0.7797 | 0.6609 |
| 0.7459 | 3.0 | 4161 | 0.7326 | 0.6872 |
| 0.7096 | 4.0 | 5548 | 0.7133 | 0.6995 |
| 0.6853 | 5.0 | 6935 | 0.6998 | 0.7002 |
| 0.6561 | 6.0 | 8322 | 0.6949 | 0.7059 |
| 0.663 | 7.0 | 9709 | 0.6956 | 0.7077 |
| 0.6352 | 8.0 | 11096 | 0.6890 | 0.7164 |
| 0.6205 | 9.0 | 12483 | 0.6888 | 0.7117 |
| 0.6203 | 10.0 | 13870 | 0.6871 | 0.7121 |
| 0.6005 | 11.0 | 15257 | 0.6879 | 0.7171 |
| 0.5985 | 12.0 | 16644 | 0.6870 | 0.7139 |
| 0.5839 | 13.0 | 18031 | 0.6882 | 0.7164 |
| 0.5861 | 14.0 | 19418 | 0.6910 | 0.7124 |
| 0.5732 | 15.0 | 20805 | 0.6916 | 0.7153 |
| 0.5797 | 16.0 | 22192 | 0.6947 | 0.7110 |
| 0.5565 | 17.0 | 23579 | 0.6930 | 0.7175 |
| 0.5636 | 18.0 | 24966 | 0.6959 | 0.7106 |
| 0.5642 | 19.0 | 26353 | 0.6952 | 0.7132 |
| 0.5717 | 20.0 | 27740 | 0.6954 | 0.7146 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Theivaprakasham/wav2vec2-base-timit-demo-colab | b07f4f1a9050739221308f5a8d7b056751c5b292 | 2021-11-15T14:33:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Theivaprakasham | null | Theivaprakasham/wav2vec2-base-timit-demo-colab | 6 | null | transformers | 15,054 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4475
- Wer: 0.3400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6929 | 4.0 | 500 | 2.4485 | 1.0009 |
| 0.9441 | 8.0 | 1000 | 0.4848 | 0.4758 |
| 0.3016 | 12.0 | 1500 | 0.4464 | 0.4016 |
| 0.1715 | 16.0 | 2000 | 0.4666 | 0.3765 |
| 0.1277 | 20.0 | 2500 | 0.4340 | 0.3515 |
| 0.1082 | 24.0 | 3000 | 0.4544 | 0.3495 |
| 0.0819 | 28.0 | 3500 | 0.4475 | 0.3400 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
TomW/TOMFINSEN | 21bc05435c80ab998d7cd210a2d9f1a40d233d37 | 2022-01-20T18:19:24.000Z | [
"pytorch",
"tensorboard",
"perceiver",
"text-classification",
"dataset:financial_phrasebank",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | TomW | null | TomW/TOMFINSEN | 6 | null | transformers | 15,055 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- recall
- accuracy
- precision
model-index:
- name: TOMFINSEN
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_50agree
metrics:
- name: Recall
type: recall
value: 0.8985861629736692
- name: Accuracy
type: accuracy
value: 0.8742268041237113
- name: Precision
type: precision
value: 0.8509995913451198
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TOMFINSEN
This model is a fine-tuned version of [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3642
- Recall: 0.8986
- Accuracy: 0.8742
- Precision: 0.8510
## Model description
More information needed
## Intended uses & limitations
More information needed
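As a hedged sketch (not part of the original card), the checkpoint should load with the Perceiver classes from `transformers`; the example sentence and the label-order comment are assumptions based on the financial_phrasebank dataset, not confirmed by the authors.
```python
import torch
from transformers import PerceiverTokenizer, PerceiverForSequenceClassification

tokenizer = PerceiverTokenizer.from_pretrained("TomW/TOMFINSEN")
model = PerceiverForSequenceClassification.from_pretrained("TomW/TOMFINSEN")

sentence = "Operating profit rose to EUR 13.1 mn from EUR 8.7 mn."  # illustrative input
encoding = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs=encoding.input_ids).logits
# financial_phrasebank classes are usually ordered negative/neutral/positive
print(logits.argmax(dim=-1).item())
```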
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall | Accuracy | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|
| 0.5403 | 1.0 | 273 | 0.4207 | 0.8358 | 0.8619 | 0.8534 |
| 0.3939 | 2.0 | 546 | 0.3750 | 0.8943 | 0.8577 | 0.8225 |
| 0.1993 | 3.0 | 819 | 0.3113 | 0.8882 | 0.8660 | 0.8367 |
| 0.301 | 4.0 | 1092 | 0.3642 | 0.8986 | 0.8742 | 0.8510 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
TurkuNLP/wikibert-base-et-cased | f3845e2bbfa1f8771b88c466c36222594b85c43d | 2020-05-24T19:59:34.000Z | [
"pytorch",
"transformers"
]
| null | false | TurkuNLP | null | TurkuNLP/wikibert-base-et-cased | 6 | null | transformers | 15,056 | Entry not found |
Ulto/pythonCoPilot | e42cbeb465f9b6ced1235ecfe22d020af5396891 | 2021-11-21T23:49:37.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-generation | false | Ulto | null | Ulto/pythonCoPilot | 6 | null | transformers | 15,057 | ---
tags:
- generated_from_trainer
model-index:
- name: pythonCoPilot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythonCoPilot
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Unbabel/XLM-R-6L | ed1f545a70857594994c7c360527068ef28b9b26 | 2022-01-05T19:22:53.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Unbabel | null | Unbabel/XLM-R-6L | 6 | null | transformers | 15,058 | Entry not found |
Unbabel/XLM-R_L19_H12_FF3072 | ae643ec5270e2d1ff74425cded7457016c46871a | 2022-01-08T22:30:12.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Unbabel | null | Unbabel/XLM-R_L19_H12_FF3072 | 6 | null | transformers | 15,059 | Entry not found |
Vampiro/DialoGPT-small-dante_c | 14ee8d2ce9e1e9c16ccfdf3e67c3b44ead34648b | 2021-09-21T03:51:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Vampiro | null | Vampiro/DialoGPT-small-dante_c | 6 | null | transformers | 15,060 | ---
tags:
- conversational
---
# Dante - Devil May Cry V DialoGPT Model |
Viona/agriculture-sentence-transformer | 991932ca2e522baa62b67e33da46f1cdfe65d965 | 2022-01-18T21:22:08.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Viona | null | Viona/agriculture-sentence-transformer | 6 | null | transformers | 15,061 | Entry not found |
Weelz/Paraphraser | 28ff0a51728dbd5c0d0ee8ed4cdb7038cb09fbc8 | 2021-11-08T19:30:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Weelz | null | Weelz/Paraphraser | 6 | null | transformers | 15,062 | Entry not found |
WikinewsSum/bert2bert-multi-de-wiki-news | 52363eeaf2aaa70974d16ab7fba16a17c724038c | 2020-08-11T09:05:48.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | WikinewsSum | null | WikinewsSum/bert2bert-multi-de-wiki-news | 6 | null | transformers | 15,063 | Entry not found |
Win-Win-option/RuT5-finetuned | 590ad141b2d0ca3f217667de84cf72787021b56e | 2021-08-12T12:08:08.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Win-Win-option | null | Win-Win-option/RuT5-finetuned | 6 | 1 | transformers | 15,064 | Бламе |
XSY/albert-base-v2-scarcasm-discriminator | 947a4cca2bdf0f1004ba57564f96223f52728152 | 2021-11-10T12:56:20.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | XSY | null | XSY/albert-base-v2-scarcasm-discriminator | 6 | null | transformers | 15,065 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: albert-base-v2-scarcasm-discriminator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-scarcasm-discriminator
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2379
- Accuracy: 0.8996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2111 | 1.0 | 2179 | 0.2379 | 0.8996 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Tokenizers 0.10.3
|
XiaoqiJiao/2nd_General_TinyBERT_6L_768D | dd860b44dab744d2424b895cbd969623a0dadb00 | 2020-09-02T03:03:02.000Z | [
"pytorch",
"transformers"
]
| null | false | XiaoqiJiao | null | XiaoqiJiao/2nd_General_TinyBERT_6L_768D | 6 | null | transformers | 15,066 | Entry not found |
Yuri/xlm-roberta-base-finetuned-marc | dc58ef71019e5fa253f6ed12542e9beb06e19617 | 2021-10-16T11:36:47.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | Yuri | null | Yuri/xlm-roberta-base-finetuned-marc | 6 | null | transformers | 15,067 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9825
- Mae: 0.4956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1432 | 1.0 | 308 | 1.0559 | 0.5133 |
| 0.9883 | 2.0 | 616 | 0.9825 | 0.4956 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
aXhyra/demo_emotion_1234567 | f08e6df13b29b3fafb58db23c7fc54b39b97a91f | 2021-12-13T18:21:16.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/demo_emotion_1234567 | 6 | null | transformers | 15,068 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_emotion_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7348035780583043
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_emotion_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9818
- F1: 0.7348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.551070618629693e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.7431 | 0.6530 |
| No log | 2.0 | 408 | 0.6943 | 0.7333 |
| 0.5176 | 3.0 | 612 | 0.8456 | 0.7326 |
| 0.5176 | 4.0 | 816 | 0.9818 | 0.7348 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/demo_hate_1234567 | abe96b83bc402100b68c7ef7713c5f71965e6e9d | 2021-12-13T19:21:09.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/demo_hate_1234567 | 6 | null | transformers | 15,069 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_hate_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7772939485986298
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_hate_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8697
- F1: 0.7773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.320702985778492e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 282 | 0.4850 | 0.7645 |
| 0.3877 | 2.0 | 564 | 0.5160 | 0.7856 |
| 0.3877 | 3.0 | 846 | 0.6927 | 0.7802 |
| 0.1343 | 4.0 | 1128 | 0.8697 | 0.7773 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/demo_hate_31415 | 0c59cc18ce2c562a878cbe629ea48d850c8c0b45 | 2021-12-13T19:15:19.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/demo_hate_31415 | 6 | null | transformers | 15,070 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_hate_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7772939485986298
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_hate_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8697
- F1: 0.7773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.320702985778492e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 282 | 0.4850 | 0.7645 |
| 0.3877 | 2.0 | 564 | 0.5160 | 0.7856 |
| 0.3877 | 3.0 | 846 | 0.6927 | 0.7802 |
| 0.1343 | 4.0 | 1128 | 0.8697 | 0.7773 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/demo_sentiment_31415 | 9230752e0da249b16c4720897af9d3a49828718d | 2021-12-13T22:54:14.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/demo_sentiment_31415 | 6 | null | transformers | 15,071 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_sentiment_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.7113620044371958
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_sentiment_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6332
- F1: 0.7114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.62486660723695e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7592 | 1.0 | 713 | 0.6509 | 0.6834 |
| 0.6389 | 2.0 | 1426 | 0.6318 | 0.7011 |
| 0.5647 | 3.0 | 2139 | 0.6320 | 0.7041 |
| 0.5391 | 4.0 | 2852 | 0.6332 | 0.7114 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/irony_trained_42 | 702417c36fd0ec046f2adff1ce50cf16346e0892 | 2021-12-12T12:10:39.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/irony_trained_42 | 6 | null | transformers | 15,072 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: irony_trained_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.6785912258473235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irony_trained_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5669
- F1: 0.6786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6774391860025942e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6669 | 1.0 | 716 | 0.6291 | 0.6198 |
| 0.5655 | 2.0 | 1432 | 0.7332 | 0.6771 |
| 0.3764 | 3.0 | 2148 | 1.4193 | 0.6554 |
| 0.229 | 4.0 | 2864 | 1.5669 | 0.6786 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/presentation_hate_1234567 | 578c45134d66df71f631806bf21987f3b5754b57 | 2021-12-15T11:31:02.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/presentation_hate_1234567 | 6 | null | transformers | 15,073 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_hate_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7679568806891273
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_hate_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8438
- F1: 0.7680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.436235805743952e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6027 | 1.0 | 282 | 0.5186 | 0.7209 |
| 0.3537 | 2.0 | 564 | 0.4989 | 0.7619 |
| 0.0969 | 3.0 | 846 | 0.6405 | 0.7697 |
| 0.0514 | 4.0 | 1128 | 0.8438 | 0.7680 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/presentation_irony_1234567 | e82dbf9f0d2d1321b6389d2603a2b8e8dd562581 | 2021-12-15T10:18:37.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/presentation_irony_1234567 | 6 | null | transformers | 15,074 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_irony_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.674604535422547
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_irony_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9493
- F1: 0.6746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.1637764704815665e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5514 | 1.0 | 90 | 0.5917 | 0.6767 |
| 0.6107 | 2.0 | 180 | 0.6123 | 0.6730 |
| 0.1327 | 3.0 | 270 | 0.7463 | 0.6970 |
| 0.1068 | 4.0 | 360 | 0.9493 | 0.6746 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/presentation_sentiment_31415 | df6596e32f308c594e39ebaeb5f0a724f8fd7844 | 2021-12-14T22:46:29.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/presentation_sentiment_31415 | 6 | null | transformers | 15,075 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_sentiment_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.71829420028644
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_sentiment_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0860
- F1: 0.7183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.2792011721188e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3747 | 1.0 | 11404 | 0.6515 | 0.7045 |
| 0.6511 | 2.0 | 22808 | 0.7334 | 0.7188 |
| 0.0362 | 3.0 | 34212 | 0.9498 | 0.7195 |
| 1.0576 | 4.0 | 45616 | 1.0860 | 0.7183 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/sentiment_trained_1234567 | cdbd3a24fe1d3e7fa27f708d6a0c746433d05f99 | 2021-12-11T22:29:06.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/sentiment_trained_1234567 | 6 | null | transformers | 15,076 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: sentiment_trained_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.7165064254565859
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_trained_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2854
- F1: 0.7165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.2140338797769864e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.6603 | 1.0 | 11404 | 0.7020 | 0.6992 |
| 0.5978 | 2.0 | 22808 | 0.8024 | 0.7151 |
| 0.5495 | 3.0 | 34212 | 1.0837 | 0.7139 |
| 0.4026 | 4.0 | 45616 | 1.2854 | 0.7165 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/sentiment_trained_42 | 7d0a6b7fd5a08e2086e9512eceb6fd5822d49f6e | 2021-12-11T21:29:18.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/sentiment_trained_42 | 6 | null | transformers | 15,077 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: sentiment_trained_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.7131935389791447
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_trained_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3194
- F1: 0.7132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.2140338797769864e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.6405 | 1.0 | 11404 | 0.6631 | 0.7046 |
| 0.5998 | 2.0 | 22808 | 0.8429 | 0.7102 |
| 0.5118 | 3.0 | 34212 | 1.0906 | 0.7155 |
| 0.3745 | 4.0 | 45616 | 1.3194 | 0.7132 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/test-model | e2e24c711f3e4ee7fdf0ffa0280b6f673f61a67c | 2021-12-08T16:50:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | aXhyra | null | aXhyra/test-model | 6 | null | transformers | 15,078 | Entry not found |
abhishek/autonlp-imdb_eval-71421 | 958379bd2753efc4e32c96d142963609b5aa1807 | 2021-05-18T22:54:10.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:abhishek/autonlp-data-imdb_eval",
"transformers",
"autonlp"
]
| text-classification | false | abhishek | null | abhishek/autonlp-imdb_eval-71421 | 6 | null | transformers | 15,079 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-imdb_eval
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 71421
## Validation Metrics
- Loss: 0.4114699363708496
- Accuracy: 0.8248248248248248
- Precision: 0.8305439330543933
- Recall: 0.8085539714867617
- AUC: 0.9088033420466026
- F1: 0.8194014447884417
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb_eval-71421
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb_eval-71421", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb_eval-71421", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
abhishek/muril-large-chaii | 200fab9f51a945013cd56290488620f5a46d5054 | 2022-05-24T08:43:06.000Z | [
"pytorch",
"bert",
"question-answering",
"hi",
"ta",
"transformers",
"autotrain_compatible"
]
| question-answering | false | abhishek | null | abhishek/muril-large-chaii | 6 | 3 | transformers | 15,080 | ---
tags:
- question-answering
language:
- hi
- ta
widget:
- text: "अभिषेक और उद्भव को कौन सा स्थान मिला?"
context: "kaggle द्वारा आयोजित chaii प्रतियोगिता में अभिषेक और उद्भव ने पांचवा स्थान हासिल किया \n उन्होंने xlm-roberta, muril और rembert जैसे मॉडलों का इस्तेमाल किया."
---
# muril-large-chaii
This is __one of the models__ that we used for getting 5th place in the hindi and tamil question answering competition organized by Kaggle.
Our full solution can be found here:
|
abnerh/wav2vec2-xlsr-300m-german-truecase | 056571869634303dccb56546359944cd8a9642c0 | 2021-12-21T18:09:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | abnerh | null | abnerh/wav2vec2-xlsr-300m-german-truecase | 6 | 1 | transformers | 15,081 | Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on German using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz.
Since capitalization is an important part of the German language (e.g. Sie vs. sie),
I trained the model with a vocabulary that includes both lower-case and upper-case letters, in the hope that it would learn the correct casing.
This removes the need for any post-processing such as truecasing.
| Reference | Prediction |
| ------------- | ------------- |
| **Die** zoologische **Einordnung** der **Spezies** ist seit **Jahrzehnten** umstritten | **Die** psoologische **Einordnung** der **Spezies** ist seit **Jahrzehnten** umstritten |
| **Hauptgeschäftsfeld** war ursprünglich der öffentliche **Sektor** in **Irland** | **Hauptgeschäftsfeld** war ursprünglich der öffentliche **Sektor** in **Irland** |
| **Er** vertrat den **Wahlkreis Donauwörth** im **Parlament** | **Er** vertrat den **Wahlkreis DonauWört** im **Parlament** |
| **Ich** bin gespannt welche **Lieder** sie wählt | **Ich** bin gespannt welche **Lieder** see wählt |
| **Eine** allgemein verbindliche **Definition** gibt es nicht | **Eine** allgemeinverbindliche **Definition** gibt es nicht |
```
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("abnerh/wav2vec2-xlsr-300m-german-truecase")
model = Wav2Vec2ForCTC.from_pretrained("abnerh/wav2vec2-xlsr-300m-german-truecase")
speech, sr = sf.read('audio.wav')
# tokenize
input_values = processor(speech, sampling_rate=sr, return_tensors="pt", padding="longest").input_values  # batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
# print transcription results
print(transcription)
``` |
activebus/BERT-PT_laptop | 4aa27ff3a08806da8928c6396807af6055b60d00 | 2021-05-18T23:03:36.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | activebus | null | activebus/BERT-PT_laptop | 6 | null | transformers | 15,082 | # ReviewBERT
BERT (post-)trained on review corpora to understand sentiment, opinions and various e-commerce aspects.
`BERT-DK_laptop` is trained on a 100MB laptop corpus under `Electronics/Computers & Accessories/Laptops`.
`BERT-PT_*` additionally uses SQuAD 1.1.
## Model Description
The original model is `BERT-base-uncased`, trained on Wikipedia + BookCorpus.
Models are post-trained on the [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and the [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as, e.g.,
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-PT_laptop")
model = AutoModel.from_pretrained("activebus/BERT-PT_laptop")
```
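Since the checkpoint is a masked language model, it can also be queried through the `fill-mask` pipeline; the review sentence below is a made-up example, not taken from the training data.
```python
from transformers import pipeline

# Minimal sketch: use the post-trained weights as a masked language model.
# The example sentence is hypothetical.
fill_mask = pipeline(
    "fill-mask",
    model="activebus/BERT-PT_laptop",
    tokenizer="activebus/BERT-PT_laptop",
)

for prediction in fill_mask("The laptop battery [MASK] is excellent."):
    print(prediction["token_str"], round(prediction["score"], 4))
```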
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
## Citation
If you find this work useful, please cite it as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
adamlin/ak_sum_open | 1c38b35381c456e87969907443123a7d0c9a2200 | 2021-08-16T08:19:34.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | adamlin | null | adamlin/ak_sum_open | 6 | null | transformers | 15,083 | Entry not found |
adamlin/ml999_grinding_wheel | 80f8827ec579f517b3ee31d49bacb286876a0de9 | 2021-12-20T16:50:35.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | adamlin | null | adamlin/ml999_grinding_wheel | 6 | null | transformers | 15,084 | Entry not found |
addy88/wav2vec2-maithili-stt | 9657d742fd3629b05a7982a08110820b07c761b7 | 2021-12-19T16:40:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | addy88 | null | addy88/wav2vec2-maithili-stt | 6 | null | transformers | 15,085 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-maithili-stt")
model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-maithili-stt")
# load audio
audio_input, sample_rate = sf.read(wav_file)
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
print(transcription)
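# Example entry point (illustrative): transcribe a local 16 kHz WAV file passed on
# the command line, using the argparse import above.
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--wav_file", type=str, required=True, help="path to a 16 kHz WAV file")
    args = parser.parse_args()
    parse_transcription(args.wav_file)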
``` |
aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-bert-base-cased | 133953ba080a76f15d4c0ea5fa9d132161d0550c | 2021-11-22T18:03:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-bert-base-cased | 6 | null | transformers | 15,086 | Entry not found |
aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-xlm-roberta-base | ef6c24ad17cb2121ab9c61d5206b1e8ba6c11f8e | 2021-11-21T12:46:16.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-xlm-roberta-base | 6 | null | transformers | 15,087 | Entry not found |
aditeyabaral/finetuned-iitp_pdt_review-indic-bert | f772ffa8d43f8fd2cbc633e6d75550257e45e226 | 2021-11-26T06:42:33.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-indic-bert | 6 | null | transformers | 15,088 | Entry not found |
aditeyabaral/finetuned-iitp_pdt_review-xlm-roberta-large | 8f0e70e36a0c860e9dd7d26b5068cc5a3dee4e3a | 2021-11-20T17:48:01.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-xlm-roberta-large | 6 | null | transformers | 15,089 | Entry not found |
aditeyabaral/finetuned-iitpmovie-additionalpretrained-distilbert-base-cased | 37b9e049488bacc458c1881e996d64ca6147b611 | 2021-11-23T14:13:36.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitpmovie-additionalpretrained-distilbert-base-cased | 6 | null | transformers | 15,090 | Entry not found |
aditeyabaral/finetuned-sail2017-bert-base-cased | 61bec6fb6f0f8200e747a3ba887c70719ad9d902 | 2021-11-14T15:19:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-sail2017-bert-base-cased | 6 | null | transformers | 15,091 | Entry not found |
ageron/distilbert-emotion | 99ad9918adcde8841dfd9e86def50306d7b81579 | 2021-09-26T21:11:32.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | ageron | null | ageron/distilbert-emotion | 6 | null | transformers | 15,092 | Entry not found |
airKlizz/bart-large-cnn-multi-en-wiki-news | 5fb5242ee24a4bd7ed0a301586786d227f993370 | 2020-06-10T08:13:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | airKlizz | null | airKlizz/bart-large-cnn-multi-en-wiki-news | 6 | null | transformers | 15,093 | Entry not found |
airKlizz/mt5-base-wikinewssum-english | b12224f638d2c5eb7a300d2a554ff4dd875bb723 | 2021-12-29T19:10:05.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | airKlizz | null | airKlizz/mt5-base-wikinewssum-english | 6 | null | transformers | 15,094 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-english
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3040
- Rouge1: 8.9565
- Rouge2: 3.6563
- Rougel: 7.1346
- Rougelsum: 8.3802
## Model description
More information needed
## Intended uses & limitations
More information needed
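A minimal sketch of how this checkpoint could be used for summarization, assuming the standard `transformers` seq2seq API; the placeholder article and generation settings are illustrative assumptions.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Minimal sketch, assuming the checkpoint follows the standard mT5 seq2seq interface.
model_name = "airKlizz/mt5-base-wikinewssum-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "Replace this placeholder with the news text you want to summarize."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```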
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 1010 | 2.4360 | 8.7287 | 3.5817 | 7.0093 | 8.1879 |
| No log | 2.0 | 2020 | 2.3922 | 8.7227 | 3.5385 | 6.96 | 8.1887 |
| No log | 3.0 | 3030 | 2.3422 | 8.8565 | 3.5772 | 7.0203 | 8.2957 |
| No log | 4.0 | 4040 | 2.3288 | 8.89 | 3.645 | 7.0602 | 8.3314 |
| 3.1253 | 5.0 | 5050 | 2.3209 | 8.868 | 3.6109 | 7.0537 | 8.299 |
| 3.1253 | 6.0 | 6060 | 2.3127 | 8.9488 | 3.6615 | 7.1044 | 8.3785 |
| 3.1253 | 7.0 | 7070 | 2.3056 | 8.9366 | 3.6507 | 7.1338 | 8.3615 |
| 3.1253 | 8.0 | 8080 | 2.3040 | 8.9565 | 3.6563 | 7.1346 | 8.3802 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
airKlizz/xlm-roberta-base-germeval21-toxic | bfaf3033fadee860a21a6fd4e5534b1cb034fc62 | 2021-07-12T14:38:25.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | airKlizz | null | airKlizz/xlm-roberta-base-germeval21-toxic | 6 | null | transformers | 15,095 | Entry not found |
akahana/roberta-base-indonesia | 4a0ba50ed3df10e7cae79f4a77c83a9adb5fc42a | 2021-11-29T09:31:49.000Z | [
"pytorch",
"tf",
"roberta",
"feature-extraction",
"id",
"dataset:wikipedia",
"transformers",
"roberta-base-indonesia",
"license:mit"
]
| feature-extraction | false | akahana | null | akahana/roberta-base-indonesia | 6 | null | transformers | 15,096 | ---
language: id
tags:
- roberta-base-indonesia
license: mit
datasets:
- wikipedia
widget:
- text: "Gajah <mask> sedang makan di kebun binatang."
---
# Indonesian RoBERTa Base
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "akahana/roberta-base-indonesia"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Gajah <mask> sedang makan di kebun binatang.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "akahana/roberta-base-indonesia"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Gajah <mask> sedang makan di kebun binatang."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
``` |
al00014/distilbert-base-uncased-finetuned-ner | e190011eb24fa69f04112f2d71b9d2790dfa1317 | 2021-08-02T15:53:31.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | al00014 | null | al00014/distilbert-base-uncased-finetuned-ner | 6 | null | transformers | 15,097 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9833669595056158
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9250
- Recall: 0.9321
- F1: 0.9285
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
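A minimal sketch of how the checkpoint could be used for inference, assuming the standard CoNLL-2003 label set; the example sentence is made up.
```python
from transformers import pipeline

# Minimal sketch: run NER with the fine-tuned checkpoint.
# aggregation_strategy="simple" merges word pieces into whole entities.
ner = pipeline(
    "token-classification",
    model="al00014/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```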
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2399 | 1.0 | 878 | 0.0702 | 0.9118 | 0.9208 | 0.9163 | 0.9805 |
| 0.0503 | 2.0 | 1756 | 0.0614 | 0.9176 | 0.9311 | 0.9243 | 0.9824 |
| 0.0304 | 3.0 | 2634 | 0.0611 | 0.9250 | 0.9321 | 0.9285 | 0.9834 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
alexLopatin/alex-ai | 86428e306e5ec053e1521c98fcb127c950402f19 | 2021-05-21T12:57:15.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | alexLopatin | null | alexLopatin/alex-ai | 6 | null | transformers | 15,098 | Entry not found |
aliosm/ComVE-gpt2-medium | f7591769dc1e0e86487c7e8fc24a99e951b87149 | 2021-05-21T13:17:55.000Z | [
"pytorch",
"jax",
"gpt2",
"feature-extraction",
"en",
"dataset:ComVE",
"transformers",
"exbert",
"commonsense",
"semeval2020",
"comve",
"license:mit"
]
| feature-extraction | false | aliosm | null | aliosm/ComVE-gpt2-medium | 6 | null | transformers | 15,099 | ---
language: "en"
tags:
- gpt2
- exbert
- commonsense
- semeval2020
- comve
license: "mit"
datasets:
- ComVE
metrics:
- bleu
widget:
- text: "Chicken can swim in water. <|continue|>"
---
# ComVE-gpt2-medium
## Model description
Model finetuned on the Commonsense Validation and Explanation (ComVE) dataset introduced in [SemEval2020 Task4](https://competitions.codalab.org/competitions/21080), using a causal language modeling (CLM) objective.
The model is able to generate a reason why a given natural language statement is against commonsense.
## Intended uses & limitations
You can use the raw model for text generation to generate reasons why natural language statements are against commonsense.
#### How to use
You can use this model directly to generate reasons why a given statement is against commonsense using the [`generate.sh`](https://github.com/AliOsm/SemEval2020-Task4-ComVE/tree/master/TaskC-Generation) script.
*Note:* make sure that you are using version `2.4.1` of the `transformers` package. Newer versions have an issue in text generation that causes the model to repeat the last generated token again and again.
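For a rough idea of what the `generate.sh` script does, the sketch below concatenates a statement with the `<|continue|>` separator and decodes a reason; it uses the current `transformers` generation API rather than version `2.4.1`, and the decoding settings are assumptions.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Minimal sketch of the generation setup; decoding parameters are illustrative.
model_name = "aliosm/ComVE-gpt2-medium"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

statement = "Chicken can swim in water. <|continue|>"
input_ids = tokenizer.encode(statement, return_tensors="pt")
output_ids = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 20,
    num_beams=5,
    early_stopping=True,
    pad_token_id=tokenizer.eos_token_id,
)
reason = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(reason)
```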
#### Limitations and bias
The model is usually biased toward negating the entered sentence instead of producing a factual reason.
## Training data
The model is initialized from the [gpt2-medium](https://github.com/huggingface/transformers/blob/master/model_cards/gpt2-README.md) model and finetuned on the [ComVE](https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation) dataset, which contains 10K against-commonsense sentences, each paired with three reference reasons.
## Training procedure
Each natural language statement that is against commonsense is concatenated with its reference reason, with `<|continue|>` as a separator, and the model is then finetuned with the CLM objective.
The model was trained on an Nvidia Tesla P100 GPU on the Google Colab platform with a 5e-5 learning rate, 5 epochs, a maximum sequence length of 128 and a batch size of 64.
<center>
<img src="https://i.imgur.com/xKbrwBC.png">
</center>
## Eval results
The model achieved fifth place with BLEU scores of 16.7153/16.1187 and third place with a Human Evaluation score of 1.94 on the SemEval2020 Task4: Commonsense Validation and Explanation development and testing datasets.
These are some examples generated by the model:
| Against Commonsense Statement | Generated Reason |
|:-----------------------------------------------------:|:--------------------------------------------:|
| Chicken can swim in water. | Chicken can't swim. |
| shoes can fly | Shoes are not able to fly. |
| Chocolate can be used to make a coffee pot | Chocolate is not used to make coffee pots. |
| you can also buy tickets online with an identity card | You can't buy tickets with an identity card. |
| a ball is square and can roll | A ball is round and cannot roll. |
| You can use detergent to dye your hair. | Detergent is used to wash clothes. |
| you can eat mercury | mercury is poisonous |
| A gardener can follow a suspect | gardener is not a police officer |
| cars can float in the ocean just like a boat | Cars are too heavy to float in the ocean. |
| I am going to work so I can lose money. | Working is not a way to lose money. |
### BibTeX entry and citation info
```bibtex
@article{fadel2020justers,
title={JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models Against Commonsense Validation and Explanation},
author={Fadel, Ali and Al-Ayyoub, Mahmoud and Cambria, Erik},
year={2020}
}
```
<a href="https://huggingface.co/exbert/?model=aliosm/ComVE-gpt2-medium">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|