modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float, 0-36.8M, nullable) | likes (float, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int, 0-38.5k) | readme (string, 0-186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
seanbenhur/MuLTiGENBiaS | 31beafd4a2aa4d4e419445336bf97f294374b5b7 | 2022-03-10T11:41:29.000Z | [
"pytorch",
"tf",
"onnx",
"xlm-roberta",
"text-classification",
"hn",
"bn",
"mn",
"dataset:ComMA",
"arxiv:2112.15417",
"transformers",
"Text Classification",
"license:wtfpl"
] | text-classification | false | seanbenhur | null | seanbenhur/MuLTiGENBiaS | 5 | null | transformers | 16,800 | ---
language:
- "hn"
- "bn"
- "mn"
tags:
- Text Classification
license: wtfpl
datasets:
- ComMA
metrics:
- F1-Score
widget:
- text: "but who in the holy hell says to relate with it,or inspired by it😂😂,i'm a 23 yr old student,and i say it's wrong,watch for entertainment purpose,and those who get inspired by such movies,its their mental problem.and all the praise that shahid's getting is for dark charachter that he portrays.and those sittis she's talking abt,don't we hear those when a villian arrives on [screen.my](http://screen.my/) point is bash sexism,whether it's by a man or a group of woman.and as far as i remember,those girls were not shown as dark characters,as kabir singh is🙂"
- text: "सही है, बोलने के अधिकार पर गाली दो, parotest के अधिकार पर पुलिश का सर फोड़ो ,मादरचोदो अधिकारो का कब सही इस्तेमाल करोगें🐷🐷🐷😠😠😠🖕"
---
# Automatic Identification of Gender Bias in Hindi, Bengali, and Meitei Code-mixed Texts
This is an XLM-Align-Base model trained on the ComMA dataset of 12k samples.
- This is an extension work from our previous paper: [Hypers at ComMA@ICON: Modelling Aggressiveness, Gender Bias and Communal Bias Identification](https://arxiv.org/abs/2112.15417).
## Example Usage
```python
from transformers import pipeline, set_seed

set_seed(425)

text = "some gender biased text"
pipe = pipeline("text-classification", model="seanbenhur/MuLTiGENBiaS")

def predict_pipe(text):
    # return the scores for every class, not just the top one
    prediction = pipe(text, return_all_scores=True)[0]
    return prediction

if __name__ == "__main__":
    target = predict_pipe(text)
    print(target)
```
### Some concerns
- Note: The model is trained on a relatively small number of samples (i.e., 12k) spanning a mix of four languages: Hindi, Bengali, Meitei, and English. The data contains both native and code-mixed scripts, so the model might perform poorly on many text samples and might not generalize well.
## Bibtex
```
@article{Benhur2021HypersAC,
title={Hypers at ComMA@ICON: Modelling Aggressiveness, Gender Bias and Communal Bias Identification},
  author={Sean Benhur and Roshan Nayak and Kanchana Sivanraju and Adeep Hande and Subalalitha Chinnaudayar Navaneethakrishnan and Ruba Priyadharshini and Bharathi Raja Chakravarthi},
journal={ArXiv},
year={2021},
volume={abs/2112.15417}
}
``` |
seanbenhur/manglish-offensive-language-identification | 54e383100e32a377476a7f4083b915909520fab6 | 2021-11-13T12:40:35.000Z | [
"pytorch",
"onnx",
"bert",
"text-classification",
"transformers"
] | text-classification | false | seanbenhur | null | seanbenhur/manglish-offensive-language-identification | 5 | null | transformers | 16,801 | Model Card coming soon |
seduerr/soccer | 3063bc8b710e2706ecfea3a740806f9bf875e82d | 2021-03-16T05:15:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/soccer | 5 | null | transformers | 16,802 | Entry not found |
sehandev/koelectra-qa | c179bf387ea08df16d504ce6b4e50b376662df7d | 2021-07-18T14:21:05.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | question-answering | false | sehandev | null | sehandev/koelectra-qa | 5 | null | transformers | 16,803 | ---
tags:
- generated_from_trainer
model_index:
- name: koelectra-qa
results:
- task:
name: Question Answering
type: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-qa
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1
- Datasets 1.9.0
- Tokenizers 0.10.3
|
sgugger/custom-resnet50d | ed94a7c6247d8aedce4647f00f20de6875b5b292 | 2022-02-09T21:17:49.000Z | [
"pytorch",
"resnet",
"transformers"
] | null | false | sgugger | null | sgugger/custom-resnet50d | 5 | null | transformers | 16,804 | Entry not found |
sgugger/test-upload1 | 00c980bd0997b12d71b0ad659fdd2c0d71ec39f1 | 2022-01-28T02:10:32.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | sgugger | null | sgugger/test-upload1 | 5 | null | transformers | 16,805 | Entry not found |
simonmun/Lo_SentenceClassification | 320c4fe8af1f099d713927a46de016af607e2ca7 | 2021-05-20T05:58:21.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | simonmun | null | simonmun/Lo_SentenceClassification | 5 | null | transformers | 16,806 | Entry not found |
sismetanin/mbart_large-financial_phrasebank | bd337c93bf973c3c20e6d65f703b08550b598a40 | 2021-03-08T09:57:26.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/mbart_large-financial_phrasebank | 5 | 1 | transformers | 16,807 | Entry not found |
sismetanin/rubert-ru-sentiment-rureviews | 64aa36de485e0424d8ea71c6ab373a9b8f6ce90b | 2021-05-20T06:09:59.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"ru",
"transformers",
"sentiment analysis",
"Russian"
] | text-classification | false | sismetanin | null | sismetanin/rubert-ru-sentiment-rureviews | 5 | null | transformers | 16,808 | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## RuBERT-ru-sentiment-RuReviews
RuBERT-ru-sentiment-RuReviews is a [RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned on the [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the “Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia.
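A minimal usage sketch (the sentiment label names come from the checkpoint's config and are not documented in this card, and the example review is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned classifier; inspect the returned labels to map them to sentiment classes.
classifier = pipeline(
    "text-classification",
    model="sismetanin/rubert-ru-sentiment-rureviews",
    tokenizer="sismetanin/rubert-ru-sentiment-rureviews",
)
print(classifier("Отличное качество, размер подошёл идеально."))  # "Great quality, the size fit perfectly."
```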
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
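As a concrete illustration of that aggregation (the numbers below are placeholders, not leaderboard values):
```python
# Unweighted average of metrics within each task, then a macro-average across tasks,
# mirroring the GLUE-style scoring described above. All values are illustrative.
task_metrics = {
    "RuSentiment": [75.0, 73.0],  # e.g. weighted F1 and macro F1
    "RuReviews": [78.0],          # single F1 metric
    "KRND": [70.0],
}
task_scores = {task: sum(m) / len(m) for task, m in task_metrics.items()}
overall_score = sum(task_scores.values()) / len(task_scores)
print(task_scores, overall_score)
```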
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@INPROCEEDINGS{Smetanin2019Sentiment,
author={Sergey Smetanin and Michail Komarov},
booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)},
title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks},
year={2019},
volume={01},
pages={482-486},
doi={10.1109/CBI.2019.00062},
ISSN={2378-1963},
month={July}
}
``` |
sismetanin/sbert-ru-sentiment-liniscrowd | 8ffaaafdbb587225c1462390242437af9230eba3 | 2021-05-20T06:30:38.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/sbert-ru-sentiment-liniscrowd | 5 | null | transformers | 16,809 | Entry not found |
socialmediaie/TRAC2020_ALL_B_bert-base-multilingual-uncased | ef35f108ac20446646fe21e4f8ba8c3734033b08 | 2021-05-20T06:53:23.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_ALL_B_bert-base-multilingual-uncased | 5 | null | transformers | 16,810 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models, as well as evaluation metrics recorded during training, are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd

TASK_LABEL_IDS = {
    "Sub-task A": ["OAG", "NAG", "CAG"],
    "Sub-task B": ["GEN", "NGEN"],
    "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}

model_version = "databank"  # the other option is the hugging face library

if model_version == "databank":
    # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
    # Unzip the file at some model_path (we are using: "databank_model")
    model_path = next(Path("databank_model").glob("./*/output/*/model"))
    # Assuming you get the following type of structure inside "databank_model"
    # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
    _, lang, task, _, base_model, _ = model_path.parts
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
    lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
    base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}"
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(base_model)

# For doing inference set the model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()

task_labels = TASK_LABEL_IDS[task]

sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(sentence)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])

with torch.no_grad():
    # works with both tuple and ModelOutput return types
    logits = model(tokens_tensor, labels=None)[0]

preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
  'CAG-NGEN': 0.03244293,
  'NAG-GEN': 0.6897794,
  'NAG-NGEN': 0.15498641,
  'OAG-GEN': 0.034373745,
  'OAG-NGEN': 0.020792078},
 array(['NAG-GEN'], dtype='<U8'))
"""
``` |
socialmediaie/TRAC2020_HIN_B_bert-base-multilingual-uncased | a4d6da6d3c5f746e149d17993dcca135bdad243c | 2021-05-20T07:00:11.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_HIN_B_bert-base-multilingual-uncased | 5 | null | transformers | 16,811 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models, as well as evaluation metrics recorded during training, are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd

TASK_LABEL_IDS = {
    "Sub-task A": ["OAG", "NAG", "CAG"],
    "Sub-task B": ["GEN", "NGEN"],
    "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}

model_version = "databank"  # the other option is the hugging face library

if model_version == "databank":
    # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
    # Unzip the file at some model_path (we are using: "databank_model")
    model_path = next(Path("databank_model").glob("./*/output/*/model"))
    # Assuming you get the following type of structure inside "databank_model"
    # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
    _, lang, task, _, base_model, _ = model_path.parts
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
    lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
    base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}"
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(base_model)

# For doing inference set the model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()

task_labels = TASK_LABEL_IDS[task]

sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(sentence)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])

with torch.no_grad():
    # works with both tuple and ModelOutput return types
    logits = model(tokens_tensor, labels=None)[0]

preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
  'CAG-NGEN': 0.03244293,
  'NAG-GEN': 0.6897794,
  'NAG-NGEN': 0.15498641,
  'OAG-GEN': 0.034373745,
  'OAG-NGEN': 0.020792078},
 array(['NAG-GEN'], dtype='<U8'))
"""
``` |
socialmediaie/TRAC2020_HIN_C_bert-base-multilingual-uncased | 41bc09d84769f4c0cfb97f12662e556856176aa3 | 2021-05-20T07:01:31.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_HIN_C_bert-base-multilingual-uncased | 5 | null | transformers | 16,812 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models, as well as evaluation metrics recorded during training, are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd

TASK_LABEL_IDS = {
    "Sub-task A": ["OAG", "NAG", "CAG"],
    "Sub-task B": ["GEN", "NGEN"],
    "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}

model_version = "databank"  # the other option is the hugging face library

if model_version == "databank":
    # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
    # Unzip the file at some model_path (we are using: "databank_model")
    model_path = next(Path("databank_model").glob("./*/output/*/model"))
    # Assuming you get the following type of structure inside "databank_model"
    # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
    _, lang, task, _, base_model, _ = model_path.parts
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
    lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
    base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}"
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(base_model)

# For doing inference set the model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()

task_labels = TASK_LABEL_IDS[task]

sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(sentence)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])

with torch.no_grad():
    # works with both tuple and ModelOutput return types
    logits = model(tokens_tensor, labels=None)[0]

preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
  'CAG-NGEN': 0.03244293,
  'NAG-GEN': 0.6897794,
  'NAG-NGEN': 0.15498641,
  'OAG-GEN': 0.034373745,
  'OAG-NGEN': 0.020792078},
 array(['NAG-GEN'], dtype='<U8'))
"""
``` |
squish/BertHarmon | 67badc6c4b4fab54ea7d5d74ba1ab5176e573130 | 2022-02-10T21:28:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | squish | null | squish/BertHarmon | 5 | null | transformers | 16,813 | ---
thumbnail: "https://en.memesrandom.com/wp-content/uploads/2020/11/juega-ajedrez.jpeg"
widget:
- text: "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 White <MOVE_SEP> [MASK]"
- example_title: Empty Board
- text: "6Q1/5k2/3P4/1R3p2/P4P2/7Q/6RK/8 b - - 2 60 Black <MOVE_SEP> [MASK]"
- example_title: Late Game Board
---
# BertHarmon
Research done at Johns Hopkins University by Michael DeLeo
Contact: [email protected]

## Introduction
BertHarmon is a BERT model trained for the task of Chess.

## Sample Usage
```python
from transformers import pipeline
task = pipeline('fill-mask', model='squish/BertHarmon')
task("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 White <MOVE_SEP> [MASK]")
```
The input string consists of the FEN position, followed by the player color, a move separator, and finally the [MASK] token. The mask token stands for the algebraic notation of the chess move to be taken given the current board state in FEN notation.
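For instance, the masked prompt can be assembled directly from a FEN string (a minimal sketch reusing the `task` pipeline from the sample usage above; the starting-position FEN is taken from the widget example):
```python
# Build the masked prompt: FEN position, side to move, move separator, then the mask token.
fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
side = "White"
prompt = f"{fen} {side} <MOVE_SEP> [MASK]"

# The top prediction is the model's suggested move in algebraic notation.
print(task(prompt)[0]["token_str"])
```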
## Links
[Github](https://github.com/deleomike/NLP-Chess)
[HuggingFace](https://huggingface.co/squish/BertHarmon) |
sshleifer/opus-mt-CELTIC-en | 40961abf3fc21b3380a172052631f0ab24356f1c | 2020-05-14T13:13:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/opus-mt-CELTIC-en | 5 | null | transformers | 16,814 | ### opus-mt-INSULAR_CELTIC-en
* source languages: ga,cy,br,gd,kw,gv
* target languages: en
* OPUS readme: [ga+cy+br+gd+kw+gv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ga+cy+br+gd+kw+gv-en/README.md)
* dataset: opus+techiaith+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+techiaith+bt-2020-04-30.zip](https://object.pouta.csc.fi/OPUS-MT-models/ga+cy+br+gd+kw+gv-en/opus+techiaith+bt-2020-04-30.zip)
* test set translations: [opus+techiaith+bt-2020-04-30.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ga+cy+br+gd+kw+gv-en/opus+techiaith+bt-2020-04-30.test.txt)
* test set scores: [opus+techiaith+bt-2020-04-30.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ga+cy+br+gd+kw+gv-en/opus+techiaith+bt-2020-04-30.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ga.en | 28.4 | 0.446 |
|
sshleifer/student_xsum_3_12 | 062f0659955f3423666b2e8c6bfefd5a161b5bec | 2021-06-14T10:05:28.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_xsum_3_12 | 5 | null | transformers | 16,815 | Entry not found |
sshleifer/student_xsum_9_9 | 66c7a05868dc12779b63d624c446b4ee1acb55b8 | 2021-06-14T10:16:45.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_xsum_9_9 | 5 | null | transformers | 16,816 | Entry not found |
ssun32/bert_twitter_turkle | e496dca13aefb660d13f3a7000242f3445073e73 | 2021-05-20T07:14:10.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | ssun32 | null | ssun32/bert_twitter_turkle | 5 | null | transformers | 16,817 | Entry not found |
suha1234/pegasus_covid19 | ebe870dbed0efcb40512b53bb24cfe5f3d92bf4a | 2021-10-29T14:37:37.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | suha1234 | null | suha1234/pegasus_covid19 | 5 | null | transformers | 16,818 | __PEGASUS FOR COVID 19 LITERATURE SUMMARIZATION__
__Model Description:__
Pegasus-large fine-tuned on COVID-19 literature.
__Dataset:__
The data is the CORD-19 dataset, containing over 400,000 scholarly articles, including over 150,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses.
Among these, 1,000 articles and their abstracts were used for fine-tuning.
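A minimal summarization sketch (the generation length limits are illustrative choices, not values from this card):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="suha1234/pegasus_covid19")

# Replace with the full text or abstract of a COVID-19 paper.
article = "SARS-CoV-2 is a novel coronavirus responsible for the COVID-19 pandemic ..."
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```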
|
sultan/ArabicTransformer-intermediate | 49171ae02f8ed9d04e2e5575637b9118471bef43 | 2021-12-05T17:06:10.000Z | [
"pytorch",
"funnel",
"feature-extraction",
"arxiv:2006.03236",
"transformers"
] | feature-extraction | false | sultan | null | sultan/ArabicTransformer-intermediate | 5 | null | transformers | 16,819 | ArabicTransformer small model (B6-6-6 with decoder)
<b>Paper</b> : ArabicTransformer: Efficient Large Arabic Language Model with Funnel Transformer and ELECTRA Objective (EMNLP21)
<b>Abstract</b>
Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, demonstrated by both AraBERT and AraELECTRA, shows an impressive result on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in the pretraining cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using less computational resources compared to other BERT-based models.
<b>Description</b>
This model was pre-trained on 44GB of Arabic corpora using [Funnel Transformer with ELECTRA objective](https://arxiv.org/abs/2006.03236). This model has more parameters (1.39x) than ELECTRA-base architecture while having similar or slightly larger inference and fine-tuning time. The model was pre-trained with significantly less resources than state-of-the-art models. We will update you with more details about the model and our accepted paper later at EMNLP21. Check our GitHub page for the latest updates and examples: https://github.com/salrowili/ArabicTransformer
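A minimal feature-extraction sketch (the first-token pooling below is an illustrative choice, not a documented recommendation):
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "sultan/ArabicTransformer-intermediate"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode an Arabic sentence and take the first token's hidden state as a sentence embedding.
inputs = tokenizer("اللغة العربية جميلة", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
sentence_embedding = hidden_states[:, 0]
print(sentence_embedding.shape)
```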
```bibtex
@inproceedings{alrowili-shanker-2021-arabictransformer-efficient,
title = "{A}rabic{T}ransformer: Efficient Large {A}rabic Language Model with Funnel Transformer and {ELECTRA} Objective",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.108",
pages = "1255--1261",
abstract = "Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, demonstrated by both AraBERT and AraELECTRA, shows an impressive result on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in the pre-training cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using less computational resources compared to other BERT-based models.",
}
``` |
sultan/BioM-ELECTRA-Base-Discriminator | 8bcc387785592aec3de94134a0c9db5ef6b633e6 | 2021-10-12T21:24:48.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | sultan | null | sultan/BioM-ELECTRA-Base-Discriminator | 5 | 1 | transformers | 16,820 | # BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.
# Model Description
This model was pre-trained on PubMed Abstracts only with biomedical domain vocabulary for 500K steps with a batch size of 1024 on TPUv3-32 unit.
Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints.
# Colab Notebook Examples
BioM-ELECTRA-LARGE on NER and ChemProt Task [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_NER_and_ChemProt_Task_on_TPU.ipynb)
BioM-ELECTRA-Large on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ELECTRA_Large_on_TPU.ipynb)
BioM-ALBERT-xxlarge on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ALBERT_xxlarge_on_TPU.ipynb)
Text Classification Task With HuggingFace Transformers and PyTorchXLA on Free TPU [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)
[COLAB]: https://colab.research.google.com/assets/colab-badge.svg
# Acknowledgment
We would like to acknowledge the support we have from Tensorflow Research Cloud (TFRC) team to grant us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` |
suwani/try_connll-finetuned-ner | 9e5b394a563cec844ebbc385b52e7ae177eff415 | 2021-09-26T02:54:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | suwani | null | suwani/try_connll-finetuned-ner | 5 | null | transformers | 16,821 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: try_connll-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9283102493074792
- name: Recall
type: recall
value: 0.9372413021590782
- name: F1
type: f1
value: 0.9327543976842575
- name: Accuracy
type: accuracy
value: 0.9840818466328817
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# try_connll-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0596
- Precision: 0.9283
- Recall: 0.9372
- F1: 0.9328
- Accuracy: 0.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2383 | 1.0 | 878 | 0.0691 | 0.9139 | 0.9239 | 0.9189 | 0.9810 |
| 0.0497 | 2.0 | 1756 | 0.0607 | 0.9200 | 0.9343 | 0.9271 | 0.9833 |
| 0.0303 | 3.0 | 2634 | 0.0596 | 0.9283 | 0.9372 | 0.9328 | 0.9841 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
textattack/albert-base-v2-STS-B | 45ccf6dc37749283ebae1369f5f7ed082b594de8 | 2020-07-06T16:32:24.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/albert-base-v2-STS-B | 5 | null | transformers | 16,822 | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a regression task, the model was trained with a mean squared error loss function.
The best score the model achieved on this task was 0.9064220351504577, as measured by the
eval set pearson correlation, found after 3 epochs.
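A hedged usage sketch of the regression head (the assumption that outputs follow the STS-B 0-5 similarity scale comes from the task, not from this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/albert-base-v2-STS-B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Score the semantic similarity of a sentence pair with the single-output regression head.
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    similarity = model(**inputs).logits.squeeze().item()
print(similarity)
```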
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/distilbert-base-uncased-QQP | 398b2e701ab5a828582439a7bf839dd0ca4ade3c | 2020-06-09T16:47:45.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-uncased-QQP | 5 | null | transformers | 16,823 | Entry not found |
thatdramebaazguy/roberta-base-wikimovies | b32788c7f69a52488fa55de115d092befde2c840 | 2021-05-20T22:29:54.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"English",
"dataset:wikimovies",
"transformers",
"roberta-base",
"masked-language-modeling",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | thatdramebaazguy | null | thatdramebaazguy/roberta-base-wikimovies | 5 | 1 | transformers | 16,824 | ---
datasets:
- wikimovies
language:
- English
thumbnail:
tags:
- roberta
- roberta-base
- masked-language-modeling
license: cc-by-4.0
---
# roberta-base for MLM
```
model_name = "thatdramebaazguy/roberta-base-wikimovies"
pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="Fill-Mask")
```
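For example (reusing the `fill_mask` pipeline created above; the movie-themed sentence is illustrative):
```python
# RoBERTa uses <mask> as its mask token.
for prediction in fill_mask("The Matrix is a science fiction <mask>."):
    print(prediction["token_str"], prediction["score"])
```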
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Fill-Mask
**Training data:** wikimovies
**Eval data:** wikimovies
**Infrastructure**: 2x Tesla v100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/shell_scripts/train_movie_roberta.sh)
## Hyperparameters
```
num_examples = 4346
batch_size = 16
n_epochs = 3
base_LM_model = "roberta-base"
learning_rate = 5e-05
max_query_length=64
Gradient Accumulation steps = 1
Total optimization steps = 816
evaluation_strategy=IntervalStrategy.NO
prediction_loss_only=False
per_device_train_batch_size=8
per_device_eval_batch_size=8
adam_beta1=0.9
adam_beta2=0.999
adam_epsilon=1e-08,
max_grad_norm=1.0
lr_scheduler_type=SchedulerType.LINEAR
warmup_ratio=0.0
seed=42
eval_steps=500
metric_for_best_model=None
greater_is_better=None
label_smoothing_factor=0.0
```
## Performance
perplexity = 4.3808
Some of my work:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)
---
|
theainerd/wav2vec2-large-xlsr-53-odia | b6ef12feab5fa2aef2a3da0b7b84a64e980b5cfb | 2021-03-24T08:43:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"or",
"dataset:OpenSLR",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | theainerd | null | theainerd/wav2vec2-large-xlsr-53-odia | 5 | null | transformers | 16,825 | ---
language: or
datasets:
- OpenSLR
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Odia by Shyam Sunder Kumar
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: OpenSLR
args: or
metrics:
- name: Test WER
type: wer
value: 68.75
---
# Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using data from the [Multilingual and code-switching ASR challenges for low resource Indian languages](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "or", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model.to("cuda")

# Punctuation stripped from the reference transcriptions before scoring
# (the exact character set is not specified in the original card; this is an assumed default).
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and collect the decoded predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 68.75 %
## Training
The script used for training can be found [Odia ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1aHpFRTxaBeNblRHAtYOy0hBeXbbMWtot?usp=sharing) |
thingsu/koDPR_context | 3f404add1c11ae38ad90f8e96bef5cc99ecd4331 | 2021-05-24T02:46:37.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | thingsu | null | thingsu/koDPR_context | 5 | 2 | transformers | 16,826 | fintuned the kykim/bert-kor-base model as a dense passage retrieval context encoder by KLUE dataset
this link is experiment result. https://wandb.ai/thingsu/DenseRetrieval
Corpus : Korean Wikipedia Corpus
Trained Strategy :
- Pretrained Model : kykim/bert-kor-base
- Inverse Cloze Task : 16 Epoch, by korquad v 1.0, KLUE MRC dataset
- In-batch Negatives : 12 Epoch, by KLUE MRC dataset, random sampling between Sparse Retrieval(TF-IDF) top 100 passage per each query
You must need to use Korean wikipedia corpus
<pre>
<code>
from transformers import AutoTokenizer, BertPreTrainedModel, BertModel

class BertEncoder(BertPreTrainedModel):
    def __init__(self, config):
        super(BertEncoder, self).__init__(config)
        self.bert = BertModel(config)
        self.init_weights()

    def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        outputs = self.bert(input_ids, attention_mask, token_type_ids)
        pooled_output = outputs[1]
        return pooled_output

model_name = 'kykim/bert-kor-base'
tokenizer = AutoTokenizer.from_pretrained(model_name)
q_encoder = BertEncoder.from_pretrained("thingsu/koDPR_question")
p_encoder = BertEncoder.from_pretrained("thingsu/koDPR_context")
</code>
</pre>
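A follow-up sketch for scoring a passage against a question with the loaded encoders (the truncation length, the example texts and the dot-product scoring are assumptions, not documented settings):
<pre>
<code>
import torch

question = "대한민국의 수도는 어디인가?"        # illustrative question: "What is the capital of South Korea?"
passage = "서울특별시는 대한민국의 수도이다."   # illustrative passage: "Seoul is the capital of South Korea."

q_inputs = tokenizer(question, return_tensors="pt", truncation=True, max_length=512)
p_inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    q_emb = q_encoder(**q_inputs)   # (1, hidden_size) pooled question embedding
    p_emb = p_encoder(**p_inputs)   # (1, hidden_size) pooled passage embedding

score = torch.matmul(q_emb, p_emb.T)  # dot-product relevance score
print(score.item())
</code>
</pre>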
|
tiennvcs/layoutlmv2-base-uncased-finetuned-infovqa | 66ed0b5b640d22ef7c11dfcb35342e851df5fb1a | 2021-11-01T16:13:10.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"question-answering",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | tiennvcs | null | tiennvcs/layoutlmv2-base-uncased-finetuned-infovqa | 5 | null | transformers | 16,827 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-base-uncased-finetuned-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased-finetuned-infovqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8677 | 0.16 | 500 | 3.2829 |
| 3.0395 | 0.33 | 1000 | 2.8431 |
| 2.561 | 0.49 | 1500 | 2.5633 |
| 2.41 | 0.65 | 2000 | 2.3548 |
| 2.247 | 0.82 | 2500 | 2.2983 |
| 2.1538 | 0.98 | 3000 | 2.2059 |
| 1.7 | 1.14 | 3500 | 2.2006 |
| 1.5705 | 1.31 | 4000 | 2.2736 |
| 1.604 | 1.47 | 4500 | 2.1415 |
| 1.5509 | 1.63 | 5000 | 2.0853 |
| 1.5053 | 1.79 | 5500 | 2.1389 |
| 1.4787 | 1.96 | 6000 | 2.0870 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.0+cu101
- Datasets 1.14.0
- Tokenizers 0.10.3
|
tiennvcs/layoutlmv2-large-uncased-finetuned-vi-infovqa | 6db2513ea4cbbc1f189f09db2752ad072da26106 | 2021-12-27T11:54:10.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"question-answering",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | tiennvcs | null | tiennvcs/layoutlmv2-large-uncased-finetuned-vi-infovqa | 5 | null | transformers | 16,828 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-large-uncased-finetuned-vi-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-large-uncased-finetuned-vi-infovqa
This model is a fine-tuned version of [microsoft/layoutlmv2-large-uncased](https://huggingface.co/microsoft/layoutlmv2-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.17 | 100 | 4.6181 |
| No log | 0.33 | 200 | 4.3357 |
| No log | 0.5 | 300 | 4.3897 |
| No log | 0.66 | 400 | 4.8238 |
| 4.4277 | 0.83 | 500 | 3.9088 |
| 4.4277 | 0.99 | 600 | 3.6063 |
| 4.4277 | 1.16 | 700 | 3.4278 |
| 4.4277 | 1.32 | 800 | 3.5428 |
| 4.4277 | 1.49 | 900 | 3.4331 |
| 3.0413 | 1.65 | 1000 | 3.3699 |
| 3.0413 | 1.82 | 1100 | 3.3622 |
| 3.0413 | 1.98 | 1200 | 3.5294 |
| 3.0413 | 2.15 | 1300 | 3.7918 |
| 3.0413 | 2.31 | 1400 | 3.4007 |
| 2.0843 | 2.48 | 1500 | 4.0296 |
| 2.0843 | 2.64 | 1600 | 4.1852 |
| 2.0843 | 2.81 | 1700 | 3.6690 |
| 2.0843 | 2.97 | 1800 | 3.6089 |
| 2.0843 | 3.14 | 1900 | 5.5534 |
| 1.7527 | 3.3 | 2000 | 4.7498 |
| 1.7527 | 3.47 | 2100 | 5.2691 |
| 1.7527 | 3.63 | 2200 | 5.1324 |
| 1.7527 | 3.8 | 2300 | 4.5912 |
| 1.7527 | 3.96 | 2400 | 4.1727 |
| 1.2037 | 4.13 | 2500 | 6.1174 |
| 1.2037 | 4.29 | 2600 | 5.7172 |
| 1.2037 | 4.46 | 2700 | 5.8843 |
| 1.2037 | 4.62 | 2800 | 6.4232 |
| 1.2037 | 4.79 | 2900 | 7.4486 |
| 0.8386 | 4.95 | 3000 | 7.1946 |
| 0.8386 | 5.12 | 3100 | 7.9869 |
| 0.8386 | 5.28 | 3200 | 8.0310 |
| 0.8386 | 5.45 | 3300 | 8.2954 |
| 0.8386 | 5.61 | 3400 | 8.5361 |
| 0.4389 | 5.78 | 3500 | 8.6040 |
| 0.4389 | 5.94 | 3600 | 8.5806 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.0+cu101
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tkwoo/electra-small-generator | 9da370f97d0dea6e1180f979182cd08b61d59740 | 2020-06-04T08:02:16.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | tkwoo | null | tkwoo/electra-small-generator | 5 | null | transformers | 16,829 | Entry not found |
tli8hf/robertabase-structured-tuning-srl-conll2012 | 00456219db672e94e0a5e95a20b21f8f168edbec | 2021-05-20T22:32:29.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | tli8hf | null | tli8hf/robertabase-structured-tuning-srl-conll2012 | 5 | null | transformers | 16,830 | Entry not found |
toastynews/electra-hongkongese-base-generator | 8be3ad567dbcdf3cfef68f3ccdbc8fa02fd68cb0 | 2020-07-07T04:20:58.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | toastynews | null | toastynews/electra-hongkongese-base-generator | 5 | null | transformers | 16,831 | Entry not found |
tongshuangwu/tacred_t5 | 68d996fa0ee1d0c5e58fb91124d1e4c351c039ae | 2021-12-10T23:42:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tongshuangwu | null | tongshuangwu/tacred_t5 | 5 | null | transformers | 16,832 | Entry not found |
tosin/dialogpt_sv | 0df776c33d0cf86d98e3e77b20a58a346706ec18 | 2021-12-02T10:04:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:GDC",
"arxiv:2110.06273",
"transformers",
"conversational",
"license:cc-by-4.0"
] | conversational | false | tosin | null | tosin/dialogpt_sv | 5 | null | transformers | 16,833 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
tags:
- conversational
- transformers
datasets:
- GDC
metrics:
- perplexity
license: cc-by-4.0
widget:
- text: "Jag ska fika."
---
## DialoGPT_SV
This is a fine-tuned model of DialoGPT (medium) on the Swedish Gothenburg Dialogue Corpus (GDC). It is intended to be used as a Swedish conversational system. The GDC dataset it is trained on is limited in scope, as it comes from transcriptions of dialogues from about 25 different social activities, including debates, academic seminars and other social situations. The dataset has about 108K turns.
The perplexity achieved on the test and validation sets (from an 80:10:10 split) is 27.81 and 27.81, respectively. More information about the original pre-trained model can be found [here](https://huggingface.co/microsoft/DialoGPT-medium)
* Generation example from an interactive environment:
|Role | Response |
|---------|------------|
|User | Jag ska fika |
|Bot | ja det är ju så att jag kanske inte kan få den att göra det |
|User | tack så mycket |
|Bot | tack tack ses vi om sex vecko igen |
Please find information about preprocessing, training and full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
The paper for this work was accepted at the Northern Lights Deep Learning (NLDL) conference 2022. Arxiv paper: [https://arxiv.org/pdf/2110.06273.pdf](https://arxiv.org/pdf/2110.06273.pdf)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_sv")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_sv")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print last output tokens from bot
    print("Swedish_GDC_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
tucan9389/distilbert-base-uncased-finetuned-cola | 20ce8ecc51e497a0d205e2c99d5eabf0e9358e9d | 2021-10-21T00:28:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | tucan9389 | null | tucan9389/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 16,834 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5308757570358055
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7501
- Matthews Correlation: 0.5309
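A minimal usage sketch; the example sentence is arbitrary, and the generic `LABEL_0`/`LABEL_1` label names are an assumption since no `id2label` mapping is listed (for CoLA, index 1 is conventionally "acceptable"):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tucan9389/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was written by the student."))
# e.g. [{'label': 'LABEL_1', 'score': ...}]  (label names depend on the checkpoint's config)
```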
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5286 | 1.0 | 535 | 0.5067 | 0.4301 |
| 0.3469 | 2.0 | 1070 | 0.5216 | 0.4802 |
| 0.2343 | 3.0 | 1605 | 0.6431 | 0.5002 |
| 0.1753 | 4.0 | 2140 | 0.7501 | 0.5309 |
| 0.1251 | 5.0 | 2675 | 0.8695 | 0.5222 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
uclanlp/plbart-multi_task-dynamic | f0416a1d52c010942cdaadbf3518bac6a4884008 | 2022-03-02T07:41:15.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-multi_task-dynamic | 5 | null | transformers | 16,835 | Entry not found |
uclanlp/plbart-multi_task-go | d1b3da4209a07b6e31798e4d188dc8e673a3f401 | 2022-03-02T07:33:49.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-multi_task-go | 5 | null | transformers | 16,836 | Entry not found |
uclanlp/plbart-single_task-dynamic-summarization | a74baf5054cda2469733c7fb69a6542040b92bb5 | 2022-03-02T07:15:43.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-dynamic-summarization | 5 | null | transformers | 16,837 | Entry not found |
uer/chinese_roberta_L-10_H-512 | 73fe51089ff8064912559ae4a998668ee446070c | 2022-07-15T08:15:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-10_H-512 | 5 | null | transformers | 16,838 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium:
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
uer/chinese_roberta_L-2_H-512 | 9bc300a1c1896bbaee4977dbb99ebf9747bb29b0 | 2022-07-15T08:11:00.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-2_H-512 | 5 | 1 | transformers | 16,839 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium:
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
unicamp-dl/ptt5-base-en-pt-msmarco-100k-v2 | 8e964d26326f1f402cfcbd55967d6039b54433a6 | 2022-01-06T21:32:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"tensorflow",
"pt-br",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | unicamp-dl | null | unicamp-dl/ptt5-base-en-pt-msmarco-100k-v2 | 5 | null | transformers | 16,840 | ---
language: pt
license: mit
tags:
- msmarco
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# PTT5-base Reranker finetuned on both English and Portuguese MS MARCO
## Introduction
ptt5-base-msmarco-en-pt-100k-v2 is a T5-based model pretrained on the BrWaC corpus and fine-tuned on both the English and the Portuguese-translated versions of the MS MARCO passage dataset. In the v2 version, the Portuguese dataset was translated using Google Translate. This model was fine-tuned for 100k steps.
Further information about the dataset or the translation method can be found on our [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-msmarco-en-pt-100k-v2'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```
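Continuing from the snippet above, the sketch below scores a query-passage pair. The `Query: ... Document: ... Relevant:` prompt and the yes/no-style target follow the monoT5 convention and are assumptions here; see the mMARCO repository for the exact input format expected by this checkpoint.

```python
# Reranking sketch; prompt format assumed (see note above), query/passage are placeholders.
query = "qual é a capital do Brasil?"
passage = "Brasília é a capital federal do Brasil desde 1960."
prompt = f"Query: {query} Document: {passage} Relevant:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=2)  # one decoded token after the decoder start token
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```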
# Citation
If you use ptt5-base-msmarco-en-pt-100k-v2, please cite:
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
      author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
usami/distilbert-base-uncased-finetuned-cola | 96b363b112827d7db4f1dac1c9d6505fdd7f8d43 | 2021-11-17T06:31:12.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | usami | null | usami/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 16,841 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5491920151313351
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7767
- Matthews Correlation: 0.5492
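A minimal inference sketch with raw logits; the example sentence is arbitrary, and treating index 1 as the "acceptable" CoLA class is an assumption since no `id2label` mapping is listed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "usami/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("They drank the pub dry.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print(probs)  # index 1 is assumed to be the "acceptable" class
```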
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5244 | 1.0 | 535 | 0.5349 | 0.4240 |
| 0.3471 | 2.0 | 1070 | 0.5087 | 0.5079 |
| 0.235 | 3.0 | 1605 | 0.6847 | 0.5106 |
| 0.1718 | 4.0 | 2140 | 0.7767 | 0.5492 |
| 0.1271 | 5.0 | 2675 | 0.8580 | 0.5469 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
valhalla/s2t_librispeech_small | 16a3ff225b5484c6ed21aec983ccecfff8e55e71 | 2021-02-26T14:24:09.000Z | [
"pytorch",
"speech_to_text_transformer",
"text2text-generation",
"en",
"dataset:librispeech_asr",
"transformers",
"audio",
"automatic-speech-recognition",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | valhalla | null | valhalla/s2t_librispeech_small | 5 | null | transformers | 16,842 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
TODO: [To be filled]
## Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_small").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_small", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 4.3 | 9.0 | |
vasudevgupta/bigbird-roberta-base | ea4fe59828a801165edbfaf02baf2be7c8c72156 | 2021-07-26T17:30:39.000Z | [
"pytorch",
"big_bird",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | vasudevgupta | null | vasudevgupta/bigbird-roberta-base | 5 | null | transformers | 16,843 | Moved here: https://huggingface.co/google/bigbird-roberta-base |
vishnun/distilgpt2-finetuned-tamilmixsentiment | a1f580c9f8146596fc709f3b34caaf876f1dee3e | 2021-08-14T05:09:58.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-generation | false | vishnun | null | vishnun/distilgpt2-finetuned-tamilmixsentiment | 5 | null | transformers | 16,844 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: distilgpt2-finetuned-tamilmixsentiment
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-tamilmixsentiment
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4572
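A minimal generation sketch with the standard text-generation pipeline; the code-mixed prompt is arbitrary:

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="vishnun/distilgpt2-finetuned-tamilmixsentiment")
set_seed(42)
print(generator("padam romba nalla irukku", max_length=30, num_return_sequences=1))
```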
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.6438 | 1.0 | 907 | 4.8026 |
| 4.774 | 2.0 | 1814 | 4.5953 |
| 4.5745 | 3.0 | 2721 | 4.5070 |
| 4.4677 | 4.0 | 3628 | 4.4688 |
| 4.4294 | 5.0 | 4535 | 4.4572 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
vittoriomaggio/bert-base-msmarco-fiqa-transfer | 1a916d230c67f5745cfd3bbb5f49d47932d2ba34 | 2022-01-23T18:13:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vittoriomaggio | null | vittoriomaggio/bert-base-msmarco-fiqa-transfer | 5 | null | transformers | 16,845 | Entry not found |
vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_445k_emb_updated | af36be07fe7d08c9efb4ad526e7817f20b32a7c9 | 2022-02-21T20:09:42.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | vocab-transformers | null | vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_445k_emb_updated | 5 | null | sentence-transformers | 16,846 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# dense_encoder-msmarco-distilbert-word2vec256k-MLM_445k
This model is based on [vocab-transformers/msmarco-distilbert-word2vec256k-MLM_445k](https://huggingface.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_445k) with a 256k-sized vocabulary initialized with word2vec that has been trained with MLM for 445k steps. **Note: Token embeddings were updated!**
It has been trained on MS MARCO using [MarginMSELoss](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_margin-mse.py). See the train_script.py in this repository. **Note: Token embeddings were updated!**
Performance:
- MS MARCO dev: 34.94 (MRR@10)
- TREC-DL 2019: 66.72 (nDCG@10)
- TREC-DL 2020: 69.14 (nDCG@10)
## Usage (Sentence-Transformers)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_445k_emb_updated')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_445k_emb_updated')
model = AutoModel.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_445k_emb_updated')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7858 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated | 26dfc4dd089cc8d683ee0483d1c129d523394863 | 2022-02-22T12:09:18.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | vocab-transformers | null | vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated | 5 | null | sentence-transformers | 16,847 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated
**Note: Token embeddings were updated!**
This model is based on [vocab-transformers/msmarco-distilbert-word2vec256k-MLM_785k_emb_updated](https://huggingface.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_785k_emb_updated) with a 256k-sized vocabulary initialized with word2vec that has been trained with MLM for 785k steps.
It has been trained on MS MARCO using [MarginMSELoss](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_margin-mse.py). See the train_script.py in this repository.
Performance:
- MS MARCO dev: 35.20 (MRR@10)
- TREC-DL 2019: 67.61 (nDCG@10)
- TREC-DL 2020: 69.62 (nDCG@10)
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated')
model = AutoModel.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7858 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
vuiseng9/bert-base-uncased-mnli | 8e0524ef179e15e7e6e0aa57c3646ab5d7ca2897 | 2021-10-06T02:40:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vuiseng9 | null | vuiseng9/bert-base-uncased-mnli | 5 | null | transformers | 16,848 | This model is developed with transformers v4.10.3.
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=bert-based-uncased-mnli
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR
nohup python run_glue.py \
--model_name_or_path bert-base-uncased \
--task_name mnli \
--do_eval \
--do_train \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--max_seq_length 128 \
--num_train_epochs 3 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-based-uncased-mnli
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR
nohup python run_glue.py \
--model_name_or_path vuiseng9/bert-base-uncased-mnli \
--task_name mnli \
--do_eval \
--per_device_eval_batch_size 16 \
--max_seq_length 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
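# Inference (sketch)
A minimal PyTorch inference sketch for the fine-tuned checkpoint. The label order (0 = entailment, 1 = neutral, 2 = contradiction) follows the GLUE/MNLI convention used by `run_glue.py` and is an assumption here; verify it against the checkpoint's `config.json`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "vuiseng9/bert-base-uncased-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
# Assumed GLUE/MNLI label order; check the checkpoint config.
print({name: round(p.item(), 3) for name, p in zip(["entailment", "neutral", "contradiction"], probs)})
```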
|
vuiseng9/bert-mnli | de654c98884cb44b3c941313f8b997ead820e638 | 2022-01-26T06:48:02.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vuiseng9 | null | vuiseng9/bert-mnli | 5 | null | transformers | 16,849 | This model is developed with transformers v4.9.1.
```
m = 0.8444
eval_samples = 9815
mm = 0.8495
eval_samples = 9832
```
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=bert-mnli
NEPOCH=3
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR
python run_glue.py \
--model_name_or_path bert-base-uncased \
--task_name mnli \
--max_seq_length 128 \
--do_train \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs $NEPOCH \
--logging_steps 1 \
--evaluation_strategy steps \
--save_steps 3000 \
--do_eval \
--per_device_eval_batch_size 128 \
--eval_steps 250 \
--output_dir $OUTDIR
--overwrite_output_dir
```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-mnli
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR
nohup python run_glue.py \
--model_name_or_path vuiseng9/bert-mnli \
--task_name mnli \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
w11wo/javanese-distilbert-small-imdb-classifier | 7b2437c375c338ec4b063344f9e0d68173314694 | 2022-02-14T16:18:57.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"jv",
"dataset:w11wo/imdb-javanese",
"arxiv:1910.01108",
"transformers",
"javanese-distilbert-small-imdb-classifier",
"license:mit"
] | text-classification | false | w11wo | null | w11wo/javanese-distilbert-small-imdb-classifier | 5 | null | transformers | 16,850 | ---
language: jv
tags:
- javanese-distilbert-small-imdb-classifier
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Aku babar pisan ora nikmati film iki."
---
## Javanese DistilBERT Small IMDB Classifier
Javanese DistilBERT Small IMDB Classifier is a movie-classification model based on the [DistilBERT model](https://arxiv.org/abs/1910.01108). It was trained on Javanese IMDB movie reviews.
The model was originally [`w11wo/javanese-distilbert-small-imdb`](https://huggingface.co/w11wo/javanese-distilbert-small-imdb) which is then fine-tuned on the [`w11wo/imdb-javanese`](https://huggingface.co/datasets/w11wo/imdb-javanese) dataset consisting of Javanese IMDB movie reviews. It achieved an accuracy of 76.04% on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|---------------------------------------------|---------|------------------|---------------------------------|
| `javanese-distilbert-small-imdb-classifier` | 66M | DistilBERT Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | accuracy | total time |
|------------|------------|------------|------------|
| 0.131 | 1.113 | 0.760 | 1:26:4 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-distilbert-small-imdb-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Film sing apik banget!")
```
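### In TensorFlow
As the model remains compatible with TensorFlow, a minimal loading sketch (assuming the TensorFlow weights published alongside the PyTorch ones):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

pretrained_name = "w11wo/javanese-distilbert-small-imdb-classifier"
tokenizer = AutoTokenizer.from_pretrained(pretrained_name)
model = TFAutoModelForSequenceClassification.from_pretrained(pretrained_name)

inputs = tokenizer("Film sing apik banget!", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs)
```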
## Disclaimer
Do consider the biases which came from the IMDB review that may be carried over into the results of this model.
## Author
Javanese DistilBERT Small IMDB Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
|
w11wo/wav2vec2-xls-r-300m-korean-lm | 3e990d7c806bfd852a31ea9f165923a2d8207f9e | 2022-03-23T18:26:45.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ko",
"dataset:kresnik/zeroth_korean",
"arxiv:2111.09296",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | w11wo | null | w11wo/wav2vec2-xls-r-300m-korean-lm | 5 | null | transformers | 16,851 | ---
language: ko
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- kresnik/zeroth_korean
model-index:
- name: Wav2Vec2 XLS-R 300M Korean LM
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Zeroth Korean
type: kresnik/zeroth_korean
args: clean
metrics:
- name: Test WER
type: wer
value: 30.94
- name: Test CER
type: cer
value: 7.97
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ko
metrics:
- name: Test WER
type: wer
value: 68.34
- name: Test CER
type: cer
value: 37.08
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ko
metrics:
- name: Test WER
type: wer
value: 66.47
---
# Wav2Vec2 XLS-R 300M Korean LM
Wav2Vec2 XLS-R 300M Korean LM is an automatic speech recognition model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [Wav2Vec2-XLS-R-300M](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Zeroth Korean](https://huggingface.co/datasets/kresnik/zeroth_korean) dataset. A 5-gram language model, trained on the Korean subset of [Open Subtitles](https://huggingface.co/datasets/open_subtitles), was subsequently added to this model.
This model was trained using HuggingFace's PyTorch framework and is part of the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by HuggingFace. All training was done on a Tesla V100, sponsored by OVH.
All necessary scripts used for training could be found in the [Files and versions](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-korean-lm/tree/main) tab, as well as the [Training metrics](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-korean-lm/tensorboard) logged via Tensorboard.
As for the N-gram language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by HuggingFace.
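For reference, a minimal transcription sketch: the audio path is a placeholder for a 16 kHz mono recording, and the n-gram decoder shipped with this repository is only used by the pipeline when `pyctcdecode` and `kenlm` are installed.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="w11wo/wav2vec2-xls-r-300m-korean-lm")
print(asr("korean_sample.wav"))  # placeholder path; returns {'text': ...}
```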
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------------- | ------- | ----- | ------------------------------- |
| `wav2vec2-xls-r-300m-korean-lm` | 300M | XLS-R | `Zeroth Korean` Dataset |
## Evaluation Results
The model achieves the following results on evaluation without a language model:
| Dataset | WER | CER |
| -------------------------------- | ------ | ------ |
| `Zeroth Korean` | 29.54% | 9.53% |
| `Robust Speech Event - Dev Data` | 76.26% | 38.67% |
With the addition of the language model, it achieves the following results:
| Dataset | WER | CER |
| -------------------------------- | ------ | ------ |
| `Zeroth Korean` | 30.94% | 7.97% |
| `Robust Speech Event - Dev Data` | 68.34% | 37.08% |
## Training procedure
The training process did not involve the addition of a language model. The following results were simply lifted from the original automatic speech recognition [model training](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-korean).
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 7.5e-05
- `train_batch_size`: 8
- `eval_batch_size`: 8
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 32
- `optimizer`: Adam with `betas=(0.9, 0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_steps`: 2000
- `num_epochs`: 50.0
- `mixed_precision_training`: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
| :-----------: | :---: | :---: | :-------------: | :----: | :----: |
| 19.7138 | 0.72 | 500 | 19.6427 | 1.0 | 1.0 |
| 4.8039 | 1.44 | 1000 | 4.7842 | 1.0 | 1.0 |
| 4.5619 | 2.16 | 1500 | 4.5608 | 0.9992 | 0.9598 |
| 4.254 | 2.88 | 2000 | 4.2729 | 0.9955 | 0.9063 |
| 4.1905 | 3.6 | 2500 | 4.2257 | 0.9903 | 0.8758 |
| 4.0683 | 4.32 | 3000 | 3.9294 | 0.9937 | 0.7911 |
| 3.486 | 5.04 | 3500 | 2.7045 | 1.0012 | 0.5934 |
| 2.946 | 5.75 | 4000 | 1.9691 | 0.9425 | 0.4634 |
| 2.634 | 6.47 | 4500 | 1.5212 | 0.8807 | 0.3850 |
| 2.4066 | 7.19 | 5000 | 1.2551 | 0.8177 | 0.3601 |
| 2.2651 | 7.91 | 5500 | 1.0423 | 0.7650 | 0.3039 |
| 2.1828 | 8.63 | 6000 | 0.9599 | 0.7273 | 0.3106 |
| 2.1023 | 9.35 | 6500 | 0.9482 | 0.7161 | 0.3063 |
| 2.0536 | 10.07 | 7000 | 0.8242 | 0.6767 | 0.2860 |
| 1.9803 | 10.79 | 7500 | 0.7643 | 0.6563 | 0.2637 |
| 1.9468 | 11.51 | 8000 | 0.7319 | 0.6441 | 0.2505 |
| 1.9178 | 12.23 | 8500 | 0.6937 | 0.6320 | 0.2489 |
| 1.8515 | 12.95 | 9000 | 0.6443 | 0.6053 | 0.2196 |
| 1.8083 | 13.67 | 9500 | 0.6286 | 0.6122 | 0.2148 |
| 1.819 | 14.39 | 10000 | 0.6015 | 0.5986 | 0.2074 |
| 1.7684 | 15.11 | 10500 | 0.5682 | 0.5741 | 0.1982 |
| 1.7195 | 15.83 | 11000 | 0.5385 | 0.5592 | 0.2007 |
| 1.7044 | 16.55 | 11500 | 0.5362 | 0.5524 | 0.2097 |
| 1.6879 | 17.27 | 12000 | 0.5119 | 0.5489 | 0.2083 |
| 1.656 | 17.98 | 12500 | 0.4990 | 0.5362 | 0.1968 |
| 1.6122 | 18.7 | 13000 | 0.4561 | 0.5092 | 0.1900 |
| 1.5919 | 19.42 | 13500 | 0.4778 | 0.5225 | 0.1975 |
| 1.5896 | 20.14 | 14000 | 0.4563 | 0.5098 | 0.1859 |
| 1.5589 | 20.86 | 14500 | 0.4362 | 0.4940 | 0.1725 |
| 1.5353 | 21.58 | 15000 | 0.4140 | 0.4826 | 0.1580 |
| 1.5441 | 22.3 | 15500 | 0.4031 | 0.4742 | 0.1550 |
| 1.5116 | 23.02 | 16000 | 0.3916 | 0.4748 | 0.1545 |
| 1.4731 | 23.74 | 16500 | 0.3841 | 0.4810 | 0.1542 |
| 1.4647 | 24.46 | 17000 | 0.3752 | 0.4524 | 0.1475 |
| 1.4328 | 25.18 | 17500 | 0.3587 | 0.4476 | 0.1461 |
| 1.4129 | 25.9 | 18000 | 0.3429 | 0.4242 | 0.1366 |
| 1.4062 | 26.62 | 18500 | 0.3450 | 0.4251 | 0.1355 |
| 1.3928 | 27.34 | 19000 | 0.3297 | 0.4145 | 0.1322 |
| 1.3906 | 28.06 | 19500 | 0.3210 | 0.4185 | 0.1336 |
| 1.358 | 28.78 | 20000 | 0.3131 | 0.3970 | 0.1275 |
| 1.3445 | 29.5 | 20500 | 0.3069 | 0.3920 | 0.1276 |
| 1.3159 | 30.22 | 21000 | 0.3035 | 0.3961 | 0.1255 |
| 1.3044 | 30.93 | 21500 | 0.2952 | 0.3854 | 0.1242 |
| 1.3034 | 31.65 | 22000 | 0.2966 | 0.3772 | 0.1227 |
| 1.2963 | 32.37 | 22500 | 0.2844 | 0.3706 | 0.1208 |
| 1.2765 | 33.09 | 23000 | 0.2841 | 0.3567 | 0.1173 |
| 1.2438 | 33.81 | 23500 | 0.2734 | 0.3552 | 0.1137 |
| 1.2487 | 34.53 | 24000 | 0.2703 | 0.3502 | 0.1118 |
| 1.2249 | 35.25 | 24500 | 0.2650 | 0.3484 | 0.1142 |
| 1.2229 | 35.97 | 25000 | 0.2584 | 0.3374 | 0.1097 |
| 1.2374 | 36.69 | 25500 | 0.2568 | 0.3337 | 0.1095 |
| 1.2153 | 37.41 | 26000 | 0.2494 | 0.3327 | 0.1071 |
| 1.1925 | 38.13 | 26500 | 0.2518 | 0.3366 | 0.1077 |
| 1.1908 | 38.85 | 27000 | 0.2437 | 0.3272 | 0.1057 |
| 1.1858 | 39.57 | 27500 | 0.2396 | 0.3265 | 0.1044 |
| 1.1808 | 40.29 | 28000 | 0.2373 | 0.3156 | 0.1028 |
| 1.1842 | 41.01 | 28500 | 0.2356 | 0.3152 | 0.1026 |
| 1.1668 | 41.73 | 29000 | 0.2319 | 0.3188 | 0.1025 |
| 1.1448 | 42.45 | 29500 | 0.2293 | 0.3099 | 0.0995 |
| 1.1327 | 43.17 | 30000 | 0.2265 | 0.3047 | 0.0979 |
| 1.1307 | 43.88 | 30500 | 0.2222 | 0.3078 | 0.0989 |
| 1.1419 | 44.6 | 31000 | 0.2215 | 0.3038 | 0.0981 |
| 1.1231 | 45.32 | 31500 | 0.2193 | 0.3013 | 0.0972 |
| 1.139 | 46.04 | 32000 | 0.2162 | 0.3007 | 0.0968 |
| 1.1114 | 46.76 | 32500 | 0.2122 | 0.2982 | 0.0960 |
| 1.111 | 47.48 | 33000 | 0.2125 | 0.2946 | 0.0948 |
| 1.0982 | 48.2 | 33500 | 0.2099 | 0.2957 | 0.0953 |
| 1.109 | 48.92 | 34000 | 0.2092 | 0.2955 | 0.0955 |
| 1.0905 | 49.64 | 34500 | 0.2088 | 0.2954 | 0.0953 |
## Disclaimer
Do consider the biases which came from pre-training datasets that may be carried over into the results of this model.
## Authors
Wav2Vec2 XLS-R 300M Korean LM was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on OVH Cloud.
## Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.10.3
|
yacov/yacov-athena-DistilBertSC | 1822930af878574fde2ced3e12009b6f69299322 | 2021-03-12T19:40:04.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | yacov | null | yacov/yacov-athena-DistilBertSC | 5 | null | transformers | 16,852 | hello
|
yahya1994/DialoGPT-small-Gintama-Gintoki | 44ee101d47848923b7fa90a23d9d76faf2f12419 | 2021-09-03T17:17:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | yahya1994 | null | yahya1994/DialoGPT-small-Gintama-Gintoki | 5 | null | transformers | 16,853 | ---
tags:
- conversational
---
# Gintoki dialog |
yaoyinnan/roberta-fakeddit | 0aa357d4815369221eb8d79b221c911da87c387c | 2021-05-20T23:15:25.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | yaoyinnan | null | yaoyinnan/roberta-fakeddit | 5 | null | transformers | 16,854 | Entry not found |
yaswanth/xls-r-300m-yaswanth-hindi2 | f9cf50c312203794fd430f211b977238bb6c595e | 2022-03-23T18:28:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | yaswanth | null | yaswanth/xls-r-300m-yaswanth-hindi2 | 5 | null | transformers | 16,855 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: xls-r-300m-yaswanth-hindi2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-yaswanth-hindi2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7163
- Wer: 0.6951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.986 | 4.46 | 500 | 2.0194 | 1.1857 |
| 0.9232 | 8.93 | 1000 | 1.2665 | 0.8435 |
| 0.5094 | 13.39 | 1500 | 1.2473 | 0.7893 |
| 0.3618 | 17.86 | 2000 | 1.3675 | 0.7789 |
| 0.2914 | 22.32 | 2500 | 1.3725 | 0.7914 |
| 0.2462 | 26.79 | 3000 | 1.4567 | 0.7795 |
| 0.228 | 31.25 | 3500 | 1.6179 | 0.7872 |
| 0.1995 | 35.71 | 4000 | 1.4932 | 0.7555 |
| 0.1878 | 40.18 | 4500 | 1.5352 | 0.7480 |
| 0.165 | 44.64 | 5000 | 1.5238 | 0.7440 |
| 0.1514 | 49.11 | 5500 | 1.5842 | 0.7498 |
| 0.1416 | 53.57 | 6000 | 1.6662 | 0.7524 |
| 0.1351 | 58.04 | 6500 | 1.6280 | 0.7356 |
| 0.1196 | 62.5 | 7000 | 1.6329 | 0.7250 |
| 0.1109 | 66.96 | 7500 | 1.6435 | 0.7302 |
| 0.1008 | 71.43 | 8000 | 1.7058 | 0.7170 |
| 0.0907 | 75.89 | 8500 | 1.6880 | 0.7387 |
| 0.0816 | 80.36 | 9000 | 1.6957 | 0.7031 |
| 0.0743 | 84.82 | 9500 | 1.7547 | 0.7222 |
| 0.0694 | 89.29 | 10000 | 1.6974 | 0.7117 |
| 0.0612 | 93.75 | 10500 | 1.7251 | 0.7020 |
| 0.0577 | 98.21 | 11000 | 1.7163 | 0.6951 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
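As a rough usage illustration (not part of the original card), the fine-tuned checkpoint can be run on a 16 kHz Hindi clip along the lines below; the audio path is a placeholder and it is assumed that the repository ships a processor/vocabulary:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "yaswanth/xls-r-300m-yaswanth-hindi2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder path; any mono clip resampled to 16 kHz works.
speech, _ = librosa.load("sample_hi.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```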
|
ychu4/distilbert-base-uncased-finetuned-cola | fac30f58fc0e456e32c9d03d4d9de2594e8b6dd3 | 2021-11-16T03:23:59.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ychu4 | null | ychu4/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 16,856 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.509687043672971
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7512
- Matthews Correlation: 0.5097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5237 | 1.0 | 535 | 0.5117 | 0.4469 |
| 0.3496 | 2.0 | 1070 | 0.5538 | 0.4965 |
| 0.2377 | 3.0 | 1605 | 0.6350 | 0.4963 |
| 0.1767 | 4.0 | 2140 | 0.7512 | 0.5097 |
| 0.1383 | 5.0 | 2675 | 0.8647 | 0.5056 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.1+cu102
- Datasets 1.15.1
- Tokenizers 0.10.1
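As a quick, illustrative way to try the fine-tuned head on CoLA-style acceptability judgements (the returned label names come from the model's config and may be the generic `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="ychu4/distilbert-base-uncased-finetuned-cola")

# Example sentences: the first is grammatical, the second is not.
print(clf("The book was written by the author."))
print(clf("The book was wrote by the author."))
```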
|
yihanlin/scibert_scivocab_uncased | d57ea87ba2184b5c1b17580ebed0e05295536b81 | 2021-05-20T09:30:31.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | null | false | yihanlin | null | yihanlin/scibert_scivocab_uncased | 5 | null | transformers | 16,857 | Entry not found |
ykliu1892/translation-en-pt-t5-Duolingo-Subtitles | 36ad6dd3e0b179acfb105c6d375bc24912d3de8f | 2021-12-13T06:06:40.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ykliu1892 | null | ykliu1892/translation-en-pt-t5-Duolingo-Subtitles | 5 | null | transformers | 16,858 | ---
tags:
- generated_from_trainer
model-index:
- name: translation-en-pt-t5-Duolingo-Subtitles
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation-en-pt-t5-Duolingo-Subtitles
This model is a fine-tuned version of [unicamp-dl/translation-en-pt-t5](https://huggingface.co/unicamp-dl/translation-en-pt-t5) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7469
- eval_bleu: 39.9403
- eval_gen_len: 8.98
- eval_runtime: 997.6641
- eval_samples_per_second: 150.351
- eval_steps_per_second: 4.699
- epoch: 0.49
- step: 56000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
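A minimal inference sketch is shown below; the task prefix is an assumption carried over from the base `unicamp-dl/translation-en-pt-t5` checkpoint and may need adjusting:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ykliu1892/translation-en-pt-t5-Duolingo-Subtitles"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Prefix assumed from the base model card; verify before relying on it.
text = "translate English to Portuguese: I would like a cup of coffee, please."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```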
|
yoshitomo-matsubara/bert-base-uncased-rte_from_bert-large-uncased-rte | fd2e5db4a9d2758ac153ebb3fa7cf20570a6b574 | 2021-06-03T05:08:12.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:rte",
"transformers",
"rte",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-base-uncased-rte_from_bert-large-uncased-rte | 5 | null | transformers | 16,859 | ---
language: en
tags:
- bert
- rte
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- rte
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on RTE dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/rte/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
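For reference, a minimal sketch of running the distilled student on an RTE-style premise/hypothesis pair (the example sentences are made up, and label names should be read from `model.config.id2label`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "yoshitomo-matsubara/bert-base-uncased-rte_from_bert-large-uncased-rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs[0])})
```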
|
yoshitomo-matsubara/bert-base-uncased-wnli | fed1047822a7f3d31e0d61525d557c762b017aa4 | 2021-05-29T22:00:50.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:wnli",
"transformers",
"wnli",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-base-uncased-wnli | 5 | null | transformers | 16,860 | ---
language: en
tags:
- bert
- wnli
- glue
- torchdistill
license: apache-2.0
datasets:
- wnli
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on WNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/wnli/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
|
yoshitomo-matsubara/bert-large-uncased-mrpc | 29cf0ac4336930584a4329cc71bdc864c77dd9f1 | 2021-05-29T21:32:51.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mrpc",
"transformers",
"mrpc",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-large-uncased-mrpc | 5 | null | transformers | 16,861 | ---
language: en
tags:
- bert
- mrpc
- glue
- torchdistill
license: apache-2.0
datasets:
- mrpc
metrics:
- f1
- accuracy
---
`bert-large-uncased` fine-tuned on MRPC dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mrpc/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
|
younes9/AI-DAY-distilbert-base-uncased-finetuned-cola | aae753109d56b4c112d11c19d4d04670b02a4bd2 | 2022-01-24T18:13:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | younes9 | null | younes9/AI-DAY-distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 16,862 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: AI-DAY-distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5382139717003264
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AI-DAY-distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7236
- Matthews Correlation: 0.5382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5308 | 1.0 | 535 | 0.5065 | 0.4296 |
| 0.3565 | 2.0 | 1070 | 0.5109 | 0.4940 |
| 0.2399 | 3.0 | 1605 | 0.6056 | 0.5094 |
| 0.1775 | 4.0 | 2140 | 0.7236 | 0.5382 |
| 0.1242 | 5.0 | 2675 | 0.8659 | 0.5347 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
yseop/FNP_T5_D2T_simple | b6637128d717e6862410e781bdaccfdde04e3c10 | 2021-09-06T20:54:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | yseop | null | yseop/FNP_T5_D2T_simple | 5 | null | transformers | 16,863 | # T5-base data-to-text model specialized for Finance NLG
__simple version__
This model was trained on a limited number of indicators, values and dates
----
## Usage (HuggingFace Transformers)
#### Call the model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("yseop/FNP_T5_D2T_simple")
model = AutoModelForSeq2SeqLM.from_pretrained("yseop/FNP_T5_D2T_simple")
text = ["Group profit | valIs | $ 10 && € $10 | dTime | in 2019"]
```
#### Choose a generation method
```python
input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt")
# Sampling-based decoding: nucleus (top-p) combined with top-k filtering
p = 0.72
k = 40
outputs = model.generate(input_ids,
do_sample=True,
top_p=p,
top_k=k,
early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
```python
input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt")
# Beam-search decoding with a repetition penalty
outputs = model.generate(input_ids,
max_length=200,
num_beams=2, repetition_penalty=2.5,
top_k=50, top_p=0.98,
length_penalty=1.0,
early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
**Created by:** [Yseop](https://www.yseop.com/) | Pioneer in Natural Language Generation (NLG) technology. Scaling human expertise through Natural Language Generation. |
yuchenlin/BART0-base | ae9af6a586f26b704e5d362c04709ef89a8946ed | 2021-12-11T05:07:38.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | yuchenlin | null | yuchenlin/BART0-base | 5 | null | transformers | 16,864 | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: "A is the son's of B's uncle. What is the family relationship between A and B?"
- text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old."
- text: "Task: copy but say the opposite.\n
PSG won its match against Barca."
- text: "Is this review positive or negative? Review: Best cast iron skillet you will every buy."
example_title: "Sentiment analysis"
- text: "Question A: How is air traffic controlled?
\nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady.
\nIn the previous sentence, decide who 'her' is referring to."
example_title: "Coreference resolution"
- text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n
Select the category for the above sentence from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n
Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n
Do sentences 1 and 2 have the same meaning?"
example_title: "Paraphrase identification"
- text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n
The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n
(CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best."
- text: "Max: Know any good websites to buy clothes from?\n
Payton: Sure :) LINK 1, LINK 2, LINK 3\n
Max: That's a lot of them!\n
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n
Max: I'll check them out. Thanks.\n\n
Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n
Sentence A: you can leave the books on the table over there.\n
Sentence B: the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n
Which book is the leftmost book?"
example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n
Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patril.\n\n
Who are the men running for mayor?"
example_title: "Reading comprehension"
- text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n
Which of the following best characterizes binne bams?\n
- Sentence 1: Binne bams are for pets.\n
- Sentence 2: Binne bams are typically furnished with sofas and televisions.\n
- Sentence 3: Binne bams are luxurious apartments.\n
- Sentence 4: Binne bams are places where people live."
---
TBA |
zer0sh0t/programmer_ai_v2 | 837ec1dd244029e5934db618d0910069ab2d7fb4 | 2021-07-10T12:30:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | zer0sh0t | null | zer0sh0t/programmer_ai_v2 | 5 | null | transformers | 16,865 | Entry not found |
zharry29/intent_snips_wh_id | 4e32f5a13df6b018c0ad7cf2784adc041c0bd7b7 | 2021-05-20T23:49:50.000Z | [
"pytorch",
"jax",
"roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_snips_wh_id | 5 | null | transformers | 16,866 | Entry not found |
zharry29/order_benchmark_xlnet | f1bb99b1f647d097f4f43c66758ae8a32e2e7430 | 2020-09-16T20:03:11.000Z | [
"pytorch",
"xlnet",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/order_benchmark_xlnet | 5 | null | transformers | 16,867 | Entry not found |
zharry29/step_benchmark_xlnet | 9d2ea97482482fb2774aa378a3b6a053ddb5a772 | 2020-09-16T19:57:55.000Z | [
"pytorch",
"xlnet",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/step_benchmark_xlnet | 5 | null | transformers | 16,868 | Entry not found |
zhihao/distilbert-base-uncased-finetuned-ner | 4835c303a22739e09d83b77c61315f869cefc983 | 2021-08-04T07:48:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | zhihao | null | zhihao/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 16,869 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9840500738716699
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0615
- Precision: 0.9251
- Recall: 0.9363
- F1: 0.9307
- Accuracy: 0.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2473 | 1.0 | 878 | 0.0714 | 0.9154 | 0.9178 | 0.9166 | 0.9808 |
| 0.0522 | 2.0 | 1756 | 0.0620 | 0.9201 | 0.9348 | 0.9274 | 0.9832 |
| 0.031 | 3.0 | 2634 | 0.0615 | 0.9251 | 0.9363 | 0.9307 | 0.9841 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
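An illustrative way to tag raw text with this checkpoint is the token-classification pipeline (the aggregation strategy below is a choice, not a requirement):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="zhihao/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```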
|
ziqingyang/XLMRobertaBaseForPAWSX-en | 4c09a68a104f5342e8f48cc57bae399d9b397eb6 | 2021-12-16T09:49:44.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | ziqingyang | null | ziqingyang/XLMRobertaBaseForPAWSX-en | 5 | null | transformers | 16,870 | Entry not found |
wietsedv/xlm-roberta-base-ft-udpos28-cy | 87d9d07034b16a0ee4c500e9ed3623b212a4528e | 2022-02-25T09:58:13.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"cy",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-cy | 5 | null | transformers | 16,871 |
---
language:
- cy
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-cy
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 78.9
- type: accuracy
name: Dutch Test accuracy
value: 81.3
- type: accuracy
name: German Test accuracy
value: 78.3
- type: accuracy
name: Italian Test accuracy
value: 74.9
- type: accuracy
name: French Test accuracy
value: 77.1
- type: accuracy
name: Spanish Test accuracy
value: 81.0
- type: accuracy
name: Russian Test accuracy
value: 82.0
- type: accuracy
name: Swedish Test accuracy
value: 80.6
- type: accuracy
name: Norwegian Test accuracy
value: 76.4
- type: accuracy
name: Danish Test accuracy
value: 78.7
- type: accuracy
name: Low Saxon Test accuracy
value: 52.7
- type: accuracy
name: Akkadian Test accuracy
value: 42.4
- type: accuracy
name: Armenian Test accuracy
value: 73.7
- type: accuracy
name: Welsh Test accuracy
value: 94.9
- type: accuracy
name: Old East Slavic Test accuracy
value: 71.6
- type: accuracy
name: Albanian Test accuracy
value: 76.8
- type: accuracy
name: Slovenian Test accuracy
value: 67.6
- type: accuracy
name: Guajajara Test accuracy
value: 33.1
- type: accuracy
name: Kurmanji Test accuracy
value: 77.1
- type: accuracy
name: Turkish Test accuracy
value: 72.0
- type: accuracy
name: Finnish Test accuracy
value: 77.1
- type: accuracy
name: Indonesian Test accuracy
value: 75.0
- type: accuracy
name: Ukrainian Test accuracy
value: 80.9
- type: accuracy
name: Polish Test accuracy
value: 82.7
- type: accuracy
name: Portuguese Test accuracy
value: 80.1
- type: accuracy
name: Kazakh Test accuracy
value: 75.5
- type: accuracy
name: Latin Test accuracy
value: 73.7
- type: accuracy
name: Old French Test accuracy
value: 54.0
- type: accuracy
name: Buryat Test accuracy
value: 60.2
- type: accuracy
name: Kaapor Test accuracy
value: 21.2
- type: accuracy
name: Korean Test accuracy
value: 56.8
- type: accuracy
name: Estonian Test accuracy
value: 79.4
- type: accuracy
name: Croatian Test accuracy
value: 79.6
- type: accuracy
name: Gothic Test accuracy
value: 29.3
- type: accuracy
name: Swiss German Test accuracy
value: 48.3
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 45.4
- type: accuracy
name: Naija Test accuracy
value: 35.7
- type: accuracy
name: Latvian Test accuracy
value: 78.4
- type: accuracy
name: Chinese Test accuracy
value: 39.9
- type: accuracy
name: Tagalog Test accuracy
value: 71.9
- type: accuracy
name: Bambara Test accuracy
value: 33.2
- type: accuracy
name: Lithuanian Test accuracy
value: 77.7
- type: accuracy
name: Galician Test accuracy
value: 79.0
- type: accuracy
name: Vietnamese Test accuracy
value: 55.2
- type: accuracy
name: Greek Test accuracy
value: 79.5
- type: accuracy
name: Catalan Test accuracy
value: 78.1
- type: accuracy
name: Czech Test accuracy
value: 80.7
- type: accuracy
name: Erzya Test accuracy
value: 48.3
- type: accuracy
name: Bhojpuri Test accuracy
value: 55.0
- type: accuracy
name: Thai Test accuracy
value: 53.2
- type: accuracy
name: Marathi Test accuracy
value: 78.5
- type: accuracy
name: Basque Test accuracy
value: 69.5
- type: accuracy
name: Slovak Test accuracy
value: 82.6
- type: accuracy
name: Kiche Test accuracy
value: 41.2
- type: accuracy
name: Yoruba Test accuracy
value: 33.9
- type: accuracy
name: Warlpiri Test accuracy
value: 36.8
- type: accuracy
name: Tamil Test accuracy
value: 75.5
- type: accuracy
name: Maltese Test accuracy
value: 36.4
- type: accuracy
name: Ancient Greek Test accuracy
value: 55.4
- type: accuracy
name: Icelandic Test accuracy
value: 73.8
- type: accuracy
name: Mbya Guarani Test accuracy
value: 33.4
- type: accuracy
name: Urdu Test accuracy
value: 64.6
- type: accuracy
name: Romanian Test accuracy
value: 76.5
- type: accuracy
name: Persian Test accuracy
value: 78.7
- type: accuracy
name: Apurina Test accuracy
value: 48.4
- type: accuracy
name: Japanese Test accuracy
value: 28.6
- type: accuracy
name: Hungarian Test accuracy
value: 79.9
- type: accuracy
name: Hindi Test accuracy
value: 70.9
- type: accuracy
name: Classical Chinese Test accuracy
value: 20.5
- type: accuracy
name: Komi Permyak Test accuracy
value: 53.0
- type: accuracy
name: Faroese Test accuracy
value: 73.1
- type: accuracy
name: Sanskrit Test accuracy
value: 38.0
- type: accuracy
name: Livvi Test accuracy
value: 65.3
- type: accuracy
name: Arabic Test accuracy
value: 85.9
- type: accuracy
name: Wolof Test accuracy
value: 43.4
- type: accuracy
name: Bulgarian Test accuracy
value: 82.8
- type: accuracy
name: Akuntsu Test accuracy
value: 36.0
- type: accuracy
name: Makurap Test accuracy
value: 24.7
- type: accuracy
name: Kangri Test accuracy
value: 47.2
- type: accuracy
name: Breton Test accuracy
value: 61.8
- type: accuracy
name: Telugu Test accuracy
value: 74.6
- type: accuracy
name: Cantonese Test accuracy
value: 40.7
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 50.3
- type: accuracy
name: Karelian Test accuracy
value: 70.6
- type: accuracy
name: Upper Sorbian Test accuracy
value: 74.1
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 70.1
- type: accuracy
name: Komi Zyrian Test accuracy
value: 44.7
- type: accuracy
name: Irish Test accuracy
value: 69.5
- type: accuracy
name: Nayini Test accuracy
value: 53.8
- type: accuracy
name: Munduruku Test accuracy
value: 28.1
- type: accuracy
name: Manx Test accuracy
value: 47.4
- type: accuracy
name: Skolt Sami Test accuracy
value: 42.0
- type: accuracy
name: Afrikaans Test accuracy
value: 74.7
- type: accuracy
name: Old Turkish Test accuracy
value: 38.0
- type: accuracy
name: Tupinamba Test accuracy
value: 37.4
- type: accuracy
name: Belarusian Test accuracy
value: 84.5
- type: accuracy
name: Serbian Test accuracy
value: 80.8
- type: accuracy
name: Moksha Test accuracy
value: 47.7
- type: accuracy
name: Western Armenian Test accuracy
value: 68.7
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 67.4
- type: accuracy
name: Khunsari Test accuracy
value: 50.0
- type: accuracy
name: Hebrew Test accuracy
value: 86.5
- type: accuracy
name: Uyghur Test accuracy
value: 68.9
- type: accuracy
name: Chukchi Test accuracy
value: 36.8
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Welsh
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-cy")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-cy")
```
|
wietsedv/xlm-roberta-base-ft-udpos28-ja | 246e47c48341f5e3429d2eb0628785a4f32e1652 | 2022-02-25T09:58:54.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-ja | 5 | null | transformers | 16,872 |
---
language:
- ja
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-ja
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 47.7
- type: accuracy
name: Dutch Test accuracy
value: 49.8
- type: accuracy
name: German Test accuracy
value: 55.7
- type: accuracy
name: Italian Test accuracy
value: 52.0
- type: accuracy
name: French Test accuracy
value: 47.2
- type: accuracy
name: Spanish Test accuracy
value: 48.2
- type: accuracy
name: Russian Test accuracy
value: 62.7
- type: accuracy
name: Swedish Test accuracy
value: 52.6
- type: accuracy
name: Norwegian Test accuracy
value: 48.6
- type: accuracy
name: Danish Test accuracy
value: 54.3
- type: accuracy
name: Low Saxon Test accuracy
value: 34.7
- type: accuracy
name: Akkadian Test accuracy
value: 38.6
- type: accuracy
name: Armenian Test accuracy
value: 67.0
- type: accuracy
name: Welsh Test accuracy
value: 48.4
- type: accuracy
name: Old East Slavic Test accuracy
value: 55.2
- type: accuracy
name: Albanian Test accuracy
value: 51.8
- type: accuracy
name: Slovenian Test accuracy
value: 46.6
- type: accuracy
name: Guajajara Test accuracy
value: 39.3
- type: accuracy
name: Kurmanji Test accuracy
value: 54.6
- type: accuracy
name: Turkish Test accuracy
value: 65.4
- type: accuracy
name: Finnish Test accuracy
value: 69.1
- type: accuracy
name: Indonesian Test accuracy
value: 59.1
- type: accuracy
name: Ukrainian Test accuracy
value: 63.2
- type: accuracy
name: Polish Test accuracy
value: 60.5
- type: accuracy
name: Portuguese Test accuracy
value: 53.3
- type: accuracy
name: Kazakh Test accuracy
value: 71.9
- type: accuracy
name: Latin Test accuracy
value: 53.5
- type: accuracy
name: Old French Test accuracy
value: 30.0
- type: accuracy
name: Buryat Test accuracy
value: 58.2
- type: accuracy
name: Kaapor Test accuracy
value: 21.7
- type: accuracy
name: Korean Test accuracy
value: 64.5
- type: accuracy
name: Estonian Test accuracy
value: 67.0
- type: accuracy
name: Croatian Test accuracy
value: 57.5
- type: accuracy
name: Gothic Test accuracy
value: 15.4
- type: accuracy
name: Swiss German Test accuracy
value: 34.5
- type: accuracy
name: Assyrian Test accuracy
value: 28.3
- type: accuracy
name: North Sami Test accuracy
value: 35.1
- type: accuracy
name: Naija Test accuracy
value: 16.8
- type: accuracy
name: Latvian Test accuracy
value: 69.6
- type: accuracy
name: Chinese Test accuracy
value: 66.2
- type: accuracy
name: Tagalog Test accuracy
value: 50.4
- type: accuracy
name: Bambara Test accuracy
value: 27.5
- type: accuracy
name: Lithuanian Test accuracy
value: 69.7
- type: accuracy
name: Galician Test accuracy
value: 51.6
- type: accuracy
name: Vietnamese Test accuracy
value: 50.6
- type: accuracy
name: Greek Test accuracy
value: 54.9
- type: accuracy
name: Catalan Test accuracy
value: 46.1
- type: accuracy
name: Czech Test accuracy
value: 61.1
- type: accuracy
name: Erzya Test accuracy
value: 41.3
- type: accuracy
name: Bhojpuri Test accuracy
value: 41.9
- type: accuracy
name: Thai Test accuracy
value: 52.3
- type: accuracy
name: Marathi Test accuracy
value: 77.3
- type: accuracy
name: Basque Test accuracy
value: 68.4
- type: accuracy
name: Slovak Test accuracy
value: 62.3
- type: accuracy
name: Kiche Test accuracy
value: 41.0
- type: accuracy
name: Yoruba Test accuracy
value: 28.8
- type: accuracy
name: Warlpiri Test accuracy
value: 30.4
- type: accuracy
name: Tamil Test accuracy
value: 75.9
- type: accuracy
name: Maltese Test accuracy
value: 29.8
- type: accuracy
name: Ancient Greek Test accuracy
value: 50.2
- type: accuracy
name: Icelandic Test accuracy
value: 54.4
- type: accuracy
name: Mbya Guarani Test accuracy
value: 28.1
- type: accuracy
name: Urdu Test accuracy
value: 46.4
- type: accuracy
name: Romanian Test accuracy
value: 55.4
- type: accuracy
name: Persian Test accuracy
value: 51.8
- type: accuracy
name: Apurina Test accuracy
value: 34.5
- type: accuracy
name: Japanese Test accuracy
value: 92.6
- type: accuracy
name: Hungarian Test accuracy
value: 61.2
- type: accuracy
name: Hindi Test accuracy
value: 48.2
- type: accuracy
name: Classical Chinese Test accuracy
value: 46.1
- type: accuracy
name: Komi Permyak Test accuracy
value: 42.8
- type: accuracy
name: Faroese Test accuracy
value: 51.1
- type: accuracy
name: Sanskrit Test accuracy
value: 33.0
- type: accuracy
name: Livvi Test accuracy
value: 57.2
- type: accuracy
name: Arabic Test accuracy
value: 52.7
- type: accuracy
name: Wolof Test accuracy
value: 32.1
- type: accuracy
name: Bulgarian Test accuracy
value: 55.1
- type: accuracy
name: Akuntsu Test accuracy
value: 41.4
- type: accuracy
name: Makurap Test accuracy
value: 19.9
- type: accuracy
name: Kangri Test accuracy
value: 41.0
- type: accuracy
name: Breton Test accuracy
value: 46.4
- type: accuracy
name: Telugu Test accuracy
value: 71.8
- type: accuracy
name: Cantonese Test accuracy
value: 60.4
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 39.5
- type: accuracy
name: Karelian Test accuracy
value: 60.7
- type: accuracy
name: Upper Sorbian Test accuracy
value: 54.6
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 49.4
- type: accuracy
name: Komi Zyrian Test accuracy
value: 39.8
- type: accuracy
name: Irish Test accuracy
value: 46.8
- type: accuracy
name: Nayini Test accuracy
value: 37.2
- type: accuracy
name: Munduruku Test accuracy
value: 39.3
- type: accuracy
name: Manx Test accuracy
value: 33.9
- type: accuracy
name: Skolt Sami Test accuracy
value: 36.4
- type: accuracy
name: Afrikaans Test accuracy
value: 45.7
- type: accuracy
name: Old Turkish Test accuracy
value: 18.1
- type: accuracy
name: Tupinamba Test accuracy
value: 32.0
- type: accuracy
name: Belarusian Test accuracy
value: 62.6
- type: accuracy
name: Serbian Test accuracy
value: 58.0
- type: accuracy
name: Moksha Test accuracy
value: 42.2
- type: accuracy
name: Western Armenian Test accuracy
value: 62.3
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 38.6
- type: accuracy
name: Khunsari Test accuracy
value: 44.6
- type: accuracy
name: Hebrew Test accuracy
value: 69.8
- type: accuracy
name: Uyghur Test accuracy
value: 65.4
- type: accuracy
name: Chukchi Test accuracy
value: 33.7
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Japanese
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ja")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ja")
```
|
wietsedv/xlm-roberta-base-ft-udpos28-pl | db89d15eec8484b00ff8ea3ae7859315ed3175a0 | 2022-02-25T09:59:13.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"pl",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-pl | 5 | null | transformers | 16,873 |
---
language:
- pl
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-pl
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 80.5
- type: accuracy
name: Dutch Test accuracy
value: 78.3
- type: accuracy
name: German Test accuracy
value: 77.7
- type: accuracy
name: Italian Test accuracy
value: 77.5
- type: accuracy
name: French Test accuracy
value: 78.0
- type: accuracy
name: Spanish Test accuracy
value: 81.7
- type: accuracy
name: Russian Test accuracy
value: 90.6
- type: accuracy
name: Swedish Test accuracy
value: 86.0
- type: accuracy
name: Norwegian Test accuracy
value: 78.9
- type: accuracy
name: Danish Test accuracy
value: 83.3
- type: accuracy
name: Low Saxon Test accuracy
value: 53.5
- type: accuracy
name: Akkadian Test accuracy
value: 35.2
- type: accuracy
name: Armenian Test accuracy
value: 85.1
- type: accuracy
name: Welsh Test accuracy
value: 65.8
- type: accuracy
name: Old East Slavic Test accuracy
value: 76.7
- type: accuracy
name: Albanian Test accuracy
value: 76.9
- type: accuracy
name: Slovenian Test accuracy
value: 86.4
- type: accuracy
name: Guajajara Test accuracy
value: 41.3
- type: accuracy
name: Kurmanji Test accuracy
value: 77.5
- type: accuracy
name: Turkish Test accuracy
value: 77.3
- type: accuracy
name: Finnish Test accuracy
value: 81.5
- type: accuracy
name: Indonesian Test accuracy
value: 79.5
- type: accuracy
name: Ukrainian Test accuracy
value: 92.3
- type: accuracy
name: Polish Test accuracy
value: 98.2
- type: accuracy
name: Portuguese Test accuracy
value: 79.9
- type: accuracy
name: Kazakh Test accuracy
value: 79.5
- type: accuracy
name: Latin Test accuracy
value: 77.5
- type: accuracy
name: Old French Test accuracy
value: 55.9
- type: accuracy
name: Buryat Test accuracy
value: 62.8
- type: accuracy
name: Kaapor Test accuracy
value: 23.3
- type: accuracy
name: Korean Test accuracy
value: 60.7
- type: accuracy
name: Estonian Test accuracy
value: 83.1
- type: accuracy
name: Croatian Test accuracy
value: 93.7
- type: accuracy
name: Gothic Test accuracy
value: 26.6
- type: accuracy
name: Swiss German Test accuracy
value: 48.9
- type: accuracy
name: Assyrian Test accuracy
value: 15.7
- type: accuracy
name: North Sami Test accuracy
value: 45.2
- type: accuracy
name: Naija Test accuracy
value: 42.3
- type: accuracy
name: Latvian Test accuracy
value: 88.5
- type: accuracy
name: Chinese Test accuracy
value: 37.8
- type: accuracy
name: Tagalog Test accuracy
value: 80.2
- type: accuracy
name: Bambara Test accuracy
value: 32.3
- type: accuracy
name: Lithuanian Test accuracy
value: 87.3
- type: accuracy
name: Galician Test accuracy
value: 80.8
- type: accuracy
name: Vietnamese Test accuracy
value: 66.8
- type: accuracy
name: Greek Test accuracy
value: 74.5
- type: accuracy
name: Catalan Test accuracy
value: 76.3
- type: accuracy
name: Czech Test accuracy
value: 91.7
- type: accuracy
name: Erzya Test accuracy
value: 51.7
- type: accuracy
name: Bhojpuri Test accuracy
value: 53.3
- type: accuracy
name: Thai Test accuracy
value: 60.2
- type: accuracy
name: Marathi Test accuracy
value: 86.5
- type: accuracy
name: Basque Test accuracy
value: 77.5
- type: accuracy
name: Slovak Test accuracy
value: 91.7
- type: accuracy
name: Kiche Test accuracy
value: 39.4
- type: accuracy
name: Yoruba Test accuracy
value: 31.1
- type: accuracy
name: Warlpiri Test accuracy
value: 43.7
- type: accuracy
name: Tamil Test accuracy
value: 83.2
- type: accuracy
name: Maltese Test accuracy
value: 30.9
- type: accuracy
name: Ancient Greek Test accuracy
value: 60.6
- type: accuracy
name: Icelandic Test accuracy
value: 80.1
- type: accuracy
name: Mbya Guarani Test accuracy
value: 33.5
- type: accuracy
name: Urdu Test accuracy
value: 70.0
- type: accuracy
name: Romanian Test accuracy
value: 81.4
- type: accuracy
name: Persian Test accuracy
value: 78.6
- type: accuracy
name: Apurina Test accuracy
value: 46.6
- type: accuracy
name: Japanese Test accuracy
value: 28.7
- type: accuracy
name: Hungarian Test accuracy
value: 73.9
- type: accuracy
name: Hindi Test accuracy
value: 74.8
- type: accuracy
name: Classical Chinese Test accuracy
value: 27.9
- type: accuracy
name: Komi Permyak Test accuracy
value: 52.9
- type: accuracy
name: Faroese Test accuracy
value: 75.9
- type: accuracy
name: Sanskrit Test accuracy
value: 34.1
- type: accuracy
name: Livvi Test accuracy
value: 65.3
- type: accuracy
name: Arabic Test accuracy
value: 78.9
- type: accuracy
name: Wolof Test accuracy
value: 38.9
- type: accuracy
name: Bulgarian Test accuracy
value: 91.0
- type: accuracy
name: Akuntsu Test accuracy
value: 39.8
- type: accuracy
name: Makurap Test accuracy
value: 24.0
- type: accuracy
name: Kangri Test accuracy
value: 52.6
- type: accuracy
name: Breton Test accuracy
value: 61.7
- type: accuracy
name: Telugu Test accuracy
value: 80.2
- type: accuracy
name: Cantonese Test accuracy
value: 45.6
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 50.9
- type: accuracy
name: Karelian Test accuracy
value: 69.1
- type: accuracy
name: Upper Sorbian Test accuracy
value: 77.5
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 65.4
- type: accuracy
name: Komi Zyrian Test accuracy
value: 45.5
- type: accuracy
name: Irish Test accuracy
value: 63.7
- type: accuracy
name: Nayini Test accuracy
value: 42.3
- type: accuracy
name: Munduruku Test accuracy
value: 30.0
- type: accuracy
name: Manx Test accuracy
value: 39.2
- type: accuracy
name: Skolt Sami Test accuracy
value: 42.4
- type: accuracy
name: Afrikaans Test accuracy
value: 74.6
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 47.0
- type: accuracy
name: Belarusian Test accuracy
value: 90.6
- type: accuracy
name: Serbian Test accuracy
value: 94.0
- type: accuracy
name: Moksha Test accuracy
value: 48.5
- type: accuracy
name: Western Armenian Test accuracy
value: 80.7
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 55.0
- type: accuracy
name: Khunsari Test accuracy
value: 43.2
- type: accuracy
name: Hebrew Test accuracy
value: 72.9
- type: accuracy
name: Uyghur Test accuracy
value: 74.9
- type: accuracy
name: Chukchi Test accuracy
value: 39.1
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Polish
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-pl")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-pl")
```
|
wietsedv/xlm-roberta-base-ft-udpos28-uk | 32633d467ae6fd694a64879b78813e49ec9ae8a7 | 2022-02-25T09:59:34.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"uk",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-uk | 5 | null | transformers | 16,874 |
---
language:
- uk
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-uk
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 82.2
- type: accuracy
name: Dutch Test accuracy
value: 84.3
- type: accuracy
name: German Test accuracy
value: 82.4
- type: accuracy
name: Italian Test accuracy
value: 83.9
- type: accuracy
name: French Test accuracy
value: 82.6
- type: accuracy
name: Spanish Test accuracy
value: 86.2
- type: accuracy
name: Russian Test accuracy
value: 93.3
- type: accuracy
name: Swedish Test accuracy
value: 86.3
- type: accuracy
name: Norwegian Test accuracy
value: 80.2
- type: accuracy
name: Danish Test accuracy
value: 85.2
- type: accuracy
name: Low Saxon Test accuracy
value: 30.9
- type: accuracy
name: Akkadian Test accuracy
value: 17.5
- type: accuracy
name: Armenian Test accuracy
value: 87.7
- type: accuracy
name: Welsh Test accuracy
value: 66.8
- type: accuracy
name: Old East Slavic Test accuracy
value: 77.5
- type: accuracy
name: Albanian Test accuracy
value: 79.7
- type: accuracy
name: Slovenian Test accuracy
value: 84.5
- type: accuracy
name: Guajajara Test accuracy
value: 14.6
- type: accuracy
name: Kurmanji Test accuracy
value: 77.0
- type: accuracy
name: Turkish Test accuracy
value: 76.3
- type: accuracy
name: Finnish Test accuracy
value: 82.5
- type: accuracy
name: Indonesian Test accuracy
value: 77.0
- type: accuracy
name: Ukrainian Test accuracy
value: 98.2
- type: accuracy
name: Polish Test accuracy
value: 91.8
- type: accuracy
name: Portuguese Test accuracy
value: 84.1
- type: accuracy
name: Kazakh Test accuracy
value: 81.8
- type: accuracy
name: Latin Test accuracy
value: 77.9
- type: accuracy
name: Old French Test accuracy
value: 26.9
- type: accuracy
name: Buryat Test accuracy
value: 60.7
- type: accuracy
name: Kaapor Test accuracy
value: 5.4
- type: accuracy
name: Korean Test accuracy
value: 61.5
- type: accuracy
name: Estonian Test accuracy
value: 84.4
- type: accuracy
name: Croatian Test accuracy
value: 93.2
- type: accuracy
name: Gothic Test accuracy
value: 3.7
- type: accuracy
name: Swiss German Test accuracy
value: 35.0
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 27.0
- type: accuracy
name: Naija Test accuracy
value: 22.5
- type: accuracy
name: Latvian Test accuracy
value: 88.9
- type: accuracy
name: Chinese Test accuracy
value: 51.9
- type: accuracy
name: Tagalog Test accuracy
value: 71.1
- type: accuracy
name: Bambara Test accuracy
value: 18.7
- type: accuracy
name: Lithuanian Test accuracy
value: 88.1
- type: accuracy
name: Galician Test accuracy
value: 85.8
- type: accuracy
name: Vietnamese Test accuracy
value: 66.3
- type: accuracy
name: Greek Test accuracy
value: 85.9
- type: accuracy
name: Catalan Test accuracy
value: 84.0
- type: accuracy
name: Czech Test accuracy
value: 92.1
- type: accuracy
name: Erzya Test accuracy
value: 49.4
- type: accuracy
name: Bhojpuri Test accuracy
value: 51.8
- type: accuracy
name: Thai Test accuracy
value: 63.3
- type: accuracy
name: Marathi Test accuracy
value: 88.3
- type: accuracy
name: Basque Test accuracy
value: 75.7
- type: accuracy
name: Slovak Test accuracy
value: 91.8
- type: accuracy
name: Kiche Test accuracy
value: 22.7
- type: accuracy
name: Yoruba Test accuracy
value: 20.0
- type: accuracy
name: Warlpiri Test accuracy
value: 32.4
- type: accuracy
name: Tamil Test accuracy
value: 81.7
- type: accuracy
name: Maltese Test accuracy
value: 16.6
- type: accuracy
name: Ancient Greek Test accuracy
value: 63.0
- type: accuracy
name: Icelandic Test accuracy
value: 81.4
- type: accuracy
name: Mbya Guarani Test accuracy
value: 23.7
- type: accuracy
name: Urdu Test accuracy
value: 64.1
- type: accuracy
name: Romanian Test accuracy
value: 82.6
- type: accuracy
name: Persian Test accuracy
value: 78.3
- type: accuracy
name: Apurina Test accuracy
value: 24.8
- type: accuracy
name: Japanese Test accuracy
value: 38.0
- type: accuracy
name: Hungarian Test accuracy
value: 82.2
- type: accuracy
name: Hindi Test accuracy
value: 68.3
- type: accuracy
name: Classical Chinese Test accuracy
value: 36.6
- type: accuracy
name: Komi Permyak Test accuracy
value: 46.0
- type: accuracy
name: Faroese Test accuracy
value: 73.6
- type: accuracy
name: Sanskrit Test accuracy
value: 13.9
- type: accuracy
name: Livvi Test accuracy
value: 59.5
- type: accuracy
name: Arabic Test accuracy
value: 82.1
- type: accuracy
name: Wolof Test accuracy
value: 18.5
- type: accuracy
name: Bulgarian Test accuracy
value: 91.1
- type: accuracy
name: Akuntsu Test accuracy
value: 15.2
- type: accuracy
name: Makurap Test accuracy
value: 2.1
- type: accuracy
name: Kangri Test accuracy
value: 51.4
- type: accuracy
name: Breton Test accuracy
value: 59.3
- type: accuracy
name: Telugu Test accuracy
value: 84.3
- type: accuracy
name: Cantonese Test accuracy
value: 53.8
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 48.0
- type: accuracy
name: Karelian Test accuracy
value: 68.6
- type: accuracy
name: Upper Sorbian Test accuracy
value: 71.7
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 68.9
- type: accuracy
name: Komi Zyrian Test accuracy
value: 40.4
- type: accuracy
name: Irish Test accuracy
value: 66.2
- type: accuracy
name: Nayini Test accuracy
value: 46.2
- type: accuracy
name: Munduruku Test accuracy
value: 8.0
- type: accuracy
name: Manx Test accuracy
value: 23.0
- type: accuracy
name: Skolt Sami Test accuracy
value: 27.7
- type: accuracy
name: Afrikaans Test accuracy
value: 81.7
- type: accuracy
name: Old Turkish Test accuracy
value: 39.8
- type: accuracy
name: Tupinamba Test accuracy
value: 20.2
- type: accuracy
name: Belarusian Test accuracy
value: 93.7
- type: accuracy
name: Serbian Test accuracy
value: 93.8
- type: accuracy
name: Moksha Test accuracy
value: 46.0
- type: accuracy
name: Western Armenian Test accuracy
value: 79.8
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 56.3
- type: accuracy
name: Khunsari Test accuracy
value: 36.5
- type: accuracy
name: Hebrew Test accuracy
value: 84.4
- type: accuracy
name: Uyghur Test accuracy
value: 77.2
- type: accuracy
name: Chukchi Test accuracy
value: 35.0
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Ukrainian
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-uk")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-uk")
```
|
debjyoti007/new_doc_classifier | 6b9717f8154b082dfa34b809768b9d332c3a59c6 | 2022-02-24T13:22:54.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | debjyoti007 | null | debjyoti007/new_doc_classifier | 5 | null | transformers | 16,875 | This model classifies text from different domains. It is currently trained on a relatively small amount of data and identifies three domains: "sports", "healthcare", and "financial". Label_0 represents "financial", Label_1 represents "healthcare", and Label_2 represents "sports". I plan to train it on more domains and more data soon, which should further improve its accuracy.
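A minimal usage sketch (assuming the checkpoint exposes the generic `LABEL_0`/`LABEL_1`/`LABEL_2` names, which are mapped back to domains below):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="debjyoti007/new_doc_classifier")
label_map = {"LABEL_0": "financial", "LABEL_1": "healthcare", "LABEL_2": "sports"}

result = clf("The central bank raised interest rates by 25 basis points.")[0]
print(label_map.get(result["label"], result["label"]), round(result["score"], 3))
```
|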
lilitket/wav2vec2-large-xls-r-armenian-colab | 8de2b0de5cccfd0c42d0fe14fc1cd4517c066b18 | 2022-02-24T14:51:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-armenian-colab | 5 | null | transformers | 16,876 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-armenian-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-armenian-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
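The hyperparameters above translate roughly into 🤗 `TrainingArguments` as sketched below (an illustration, not the original notebook; the output directory is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-armenian-colab",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # effective train batch size of 32
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,  # native AMP (requires a GPU)
    seed=42,
)
```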
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
DoyyingFace/bert-tweets-semeval-unclean | 9576c583a95ff05a923ee1e6d944902160424d60 | 2022-02-24T14:35:55.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-tweets-semeval-unclean | 5 | null | transformers | 16,877 | Entry not found |
DoyyingFace/bert-tweets-semeval-clean | 0864a7c8cbbd6f8e3360890a6efea9a3705609d9 | 2022-02-24T14:44:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-tweets-semeval-clean | 5 | null | transformers | 16,878 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-concat-clean-with-unclean-valid | f21bc56cf7b429142c0eb2a4e96fa49643bde80d | 2022-02-24T15:09:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-concat-clean-with-unclean-valid | 5 | null | transformers | 16,879 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-unlean-with-clean-valid | 46b5ea18f5435a8b30697adc78294635ca23558e | 2022-02-24T15:48:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-unlean-with-clean-valid | 5 | null | transformers | 16,880 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-with-clean-valid | 01e9eefd8dbf3bb8d5f72e5934dea128f33cf98f | 2022-02-24T16:20:51.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-asian-unclean-with-clean-valid | 5 | null | transformers | 16,881 | Entry not found |
inovex/multi2convai-quality-it-bert | fcc2e2f7a6f4f0f4c2a0de6d68851318d3a14c14 | 2022-03-01T09:02:08.000Z | [
"pytorch",
"bert",
"text-classification",
"it",
"transformers",
"license:mit"
] | text-classification | false | inovex | null | inovex/multi2convai-quality-it-bert | 5 | null | transformers | 16,882 | ---
tags:
- text-classification
widget:
- text: "Avviare il programma"
license: mit
language: it
---
# Multi2ConvAI-Quality: finetuned Bert for Italian
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))
- language: Italian (it)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-it-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-it-bert")
````
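A short inference sketch (added for illustration, continuing the snippet above): it scores the card's example utterance and maps the top class index back to a label via the checkpoint config.

```python
import torch

inputs = tokenizer("Avviare il programma", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
# Label names come from the checkpoint config; fall back to the raw index if unset.
print(model.config.id2label.get(predicted_id, predicted_id))
```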
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-4 | 7ab266823becab9ebb619d4bdd99dd994a2ea112 | 2022-02-24T16:38:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-4 | 5 | null | transformers | 16,883 | Entry not found |
ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000 | 358e2a5e2453dd603fd0a68dd87ccbfc3b977900 | 2022-02-25T03:38:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ASCCCCCCCC | null | ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000 | 5 | null | transformers | 16,884 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-amazon_zh_20000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3516
- Accuracy: 0.414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4343 | 1.0 | 1250 | 1.3516 | 0.414 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
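For reference, a sketch of how the hyperparameters listed above could be expressed with the `transformers` Trainer API (an illustration, not the original training script; `output_dir` is a placeholder and dataset preparation is omitted).

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-amazon_zh_20000",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=1,
    lr_scheduler_type="linear",
)
```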
|
DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-8 | 888307992689499910770d2c55e65dbd5eea4ed6 | 2022-02-25T03:14:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-8 | 5 | null | transformers | 16,885 | Entry not found |
Brendan/cse244b-hw2-roberta | 3391da53f2d756071d84b8f984d343d5168785f9 | 2022-02-26T05:48:15.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Brendan | null | Brendan/cse244b-hw2-roberta | 5 | null | transformers | 16,886 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-concat-unclean-discriminate | 0b18a9be1e97044218c2016ffd9ed496b51ec981 | 2022-02-25T04:03:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-concat-unclean-discriminate | 5 | null | transformers | 16,887 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-50 | c7d43f39111f81ac19b7701c765c32372b5e9b45 | 2022-02-25T04:10:10.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-50 | 5 | null | transformers | 16,888 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-25 | 638c3d6f269527137a57a32cbc69738948fb0d7f | 2022-02-25T04:28:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-25 | 5 | null | transformers | 16,889 | Entry not found |
MhF/distilbert-base-uncased-distilled-clinc | e14a105707812b39a376260b002931e96c2bc466 | 2022-02-25T10:48:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | MhF | null | MhF/distilbert-base-uncased-distilled-clinc | 5 | null | transformers | 16,890 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9461290322580646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2663
- Accuracy: 0.9461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.1991 | 1.0 | 318 | 3.1495 | 0.7523 |
| 2.4112 | 2.0 | 636 | 1.5868 | 0.8510 |
| 1.1887 | 3.0 | 954 | 0.7975 | 0.9203 |
| 0.5952 | 4.0 | 1272 | 0.4870 | 0.9319 |
| 0.3275 | 5.0 | 1590 | 0.3571 | 0.9419 |
| 0.2066 | 6.0 | 1908 | 0.3070 | 0.9429 |
| 0.1456 | 7.0 | 2226 | 0.2809 | 0.9448 |
| 0.1154 | 8.0 | 2544 | 0.2697 | 0.9468 |
| 0.1011 | 9.0 | 2862 | 0.2663 | 0.9461 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
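A quick inference sketch against the same dataset configuration used above (added for illustration; the predicted label names depend on the checkpoint's `id2label` config).

```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="MhF/distilbert-base-uncased-distilled-clinc")

# Classify one utterance from the clinc_oos "plus" validation split.
sample = load_dataset("clinc_oos", "plus", split="validation")[0]
print(sample["text"])
print(classifier(sample["text"]))
```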
|
vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k | 9bdbf4c75b56d718011677f0034c8d02353b4ba2 | 2022-02-25T12:58:31.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | vocab-transformers | null | vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k | 5 | null | transformers | 16,891 | # cross_encoder-msmarco-word2vec256k
This CrossEncoder was trained with MarginMSE loss, starting from the [nicoladecao/msmarco-word2vec256000-distilbert-base-uncased](https://hf.co/nicoladecao/msmarco-word2vec256000-distilbert-base-uncased) checkpoint. **The word embedding matrix was frozen during training.**
You can load the model with [sentence-transformers](https://sbert.net):
```python
from sentence_transformers import CrossEncoder
from torch import nn

# This model's checkpoint id on the Hugging Face Hub.
model_name = "vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k"
model = CrossEncoder(model_name, default_activation_function=nn.Identity())
```
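A short scoring sketch (added for illustration; the query/passage pairs are made-up examples). `predict` returns one relevance score per pair, and with the identity activation these are raw logits, so higher means more relevant.

```python
# Score (query, passage) pairs with the model loaded above.
scores = model.predict([
    ("how many people live in berlin", "Berlin has a population of about 3.7 million."),
    ("how many people live in berlin", "Paris is the capital of France."),
])
print(scores)
```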
Performance on TREC Deep Learning (nDCG@10):
- TREC-DL 19: 72.49
- TREC-DL 20: 72.71
|
anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-8 | 22612cfb410d68d2c5b4138891a88d7fe5c61593 | 2022-02-25T21:42:45.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-8 | 5 | null | transformers | 16,892 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-8
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
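An extractive question-answering sketch (added for illustration; the question and context below are made-up examples).

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-8",
)

# SQuAD-style usage: the answer is extracted as a span of the context.
result = qa(
    question="Where was the treaty signed?",
    context="The treaty was signed in Geneva in 1949 after months of negotiation.",
)
print(result["answer"], result["score"])
```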
|
DoyyingFace/bert-asian-hate-tweets-self-clean-small-more-epoch | 395904a9ab7b1144fd37aec516f31c1ee00cc7ce | 2022-02-26T02:56:57.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-clean-small-more-epoch | 5 | null | transformers | 16,893 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch5 | b73fdc9a750018ec3375f4f35eacf1258ef8f0e8 | 2022-02-26T03:07:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch5 | 5 | null | transformers | 16,894 | Entry not found |
ali2066/finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37 | 1ab288fb6dbe5d1667a98a94bacb330e0c76dd5f | 2022-02-26T03:20:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37 | 5 | null | transformers | 16,895 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09 | 92c19bb87702f101232be3f8bbe38312a5683331 | 2022-02-26T03:25:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09 | 5 | null | transformers | 16,896 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-10 | 4cce790d221fee0441231cefd93a490c9eca8131 | 2022-02-26T09:47:52.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-10 | 5 | null | transformers | 16,897 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-10
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
msintaha/bert-base-uncased-copa-kb-17 | 990f2ce5f4be7e5432dd9ecf70c98c995be0287f | 2022-02-26T22:53:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"dataset:super_glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | msintaha | null | msintaha/bert-base-uncased-copa-kb-17 | 5 | null | transformers | 16,898 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-copa-kb-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-copa-kb-17
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6385
- Accuracy: 0.7000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6792 | 0.6500 |
| No log | 2.0 | 50 | 0.6385 | 0.7000 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
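A multiple-choice inference sketch for a COPA-style example (added for illustration; the premise/choice formatting below is an assumption, since the exact preprocessing used during fine-tuning is not documented in this card).

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "msintaha/bert-base-uncased-copa-kb-17"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

premise = "The man broke his toe."
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]

# Encode (premise, choice) pairs, then add a batch dimension: (1, num_choices, seq_len).
encoding = tokenizer([premise] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print("Predicted choice:", choices[int(logits.argmax(dim=-1))])
```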
|
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47 | d3feb3ffc111f2ec1e26d9ee7d1990aba53f78db | 2022-02-27T16:33:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47 | 5 | null | transformers | 16,899 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5002
- Accuracy: 0.8103
- F1: 0.8764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4178 | 0.7963 | 0.8630 |
| No log | 2.0 | 390 | 0.3935 | 0.8061 | 0.8770 |
| 0.4116 | 3.0 | 585 | 0.4037 | 0.8085 | 0.8735 |
| 0.4116 | 4.0 | 780 | 0.4696 | 0.8146 | 0.8796 |
| 0.4116 | 5.0 | 975 | 0.4849 | 0.8207 | 0.8823 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|