modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
shacharm/wav2vec2-large-xls-r-300m-ja-colab | 46859476b0d95d65562f482f1e7f2872021b664e | 2022-02-07T06:15:51.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | shacharm | null | shacharm/wav2vec2-large-xls-r-300m-ja-colab | 6 | null | transformers | 15,400 | Entry not found |
silky/deep-todo | 4cbce34a526969a0a751765d0cb85d7e00645eed | 2021-06-18T08:20:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | silky | null | silky/deep-todo | 6 | null | transformers | 15,401 | # deep-todo
Wondering what to do? Not anymore!
Generate arbitrary todo's.
Source: <https://colab.research.google.com/drive/1PlKLrGHaCuvWCKNC4fmQEMElF-iRec9f?usp=sharing>
The todo's come from a random selection of (public) repositories I had on my computer.
### Sample
A bunch of todo's:
```
----------------------------------------------------------------------------------------------------
0: TODO: should we check the other edges?/
1: TODO: add more information here.
2: TODO: We could also add more general functions in this case to avoid/
3: TODO: It seems strange to have the same constructor when the base set of/
4: TODO: This implementation should be simplified, as it's too complex to handle the/
5: TODO: we should be able to relax the intrinsic if not
6: TODO: Make sure this doesn't go through the next generation of plugins. It would be better if this was
7: TODO: There is always a small number of errors when we have this type/
8: TODO: Add support for 't' values (not 't') for all the constant types/
9: TODO: Check that we use loglef_cxx in the loop*
10: TODO: Support double or double values./
11: TODO: Add tests that verify that this function does not work for all targets/
12: TODO: we'd expect the result to be identical to the same value in terms of
13: TODO: We are not using a new type for 'w' as it does not denote 'y' yet, so we could/
14: TODO: if we had to find a way to extract the source file directly, we would/
15: TODO: this should fold into a flat array that would be/
16: TODO: Check if we can make it work with the correct address./
17: TODO: support v2i with V2R4+
18: TODO: Can a fast-math-flags check be generalized to all types of data? */
19: TODO: Add support for other type-specific VOPs.
```
Generated by:
```
tf.random.set_seed(0)
sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=40,
    top_k=50,
    top_p=0.95,
    num_return_sequences=20
)
print("Output:\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    m = tokenizer.decode(sample_output, skip_special_tokens=True)
    m = m.split("TODO")[1].strip()
    print("{}: TODO{}".format(i, m))
```
## TODO
- [ ] Fixup the data; it seems to contain multiple todo's per line
- [ ] Preprocess the data in a better way
- [ ] Download github and train it on everything |
sismetanin/sbert-ru-sentiment-krnd | ac271b75c142d37034da80654365bc6c5405bdda | 2021-05-20T06:27:51.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"ru",
"transformers",
"sentiment analysis",
"Russian",
"SBERT-Large"
]
| text-classification | false | sismetanin | null | sismetanin/sbert-ru-sentiment-krnd | 6 | null | transformers | 15,402 | ---
language:
- ru
tags:
- sentiment analysis
- Russian
- SBERT-Large
---
## SBERT-Large on Kaggle Russian News Dataset
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F<sub>1</sub></td>
<td>macro F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>micro F<sub>1</sub></td>
<td>macro F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>weighted F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>F<sub>1</sub></td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
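As an illustration, the checkpoint can be loaded with the standard `transformers` sequence-classification API. The sketch below is an assumption-based example (the Russian input sentence is invented, and the class-index-to-label mapping has to be read from the model's own config):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the fine-tuned sentiment classifier.
tokenizer = AutoTokenizer.from_pretrained("sismetanin/sbert-ru-sentiment-krnd")
model = AutoModelForSequenceClassification.from_pretrained("sismetanin/sbert-ru-sentiment-krnd")
model.eval()

text = "Отличная новость, очень рад!"  # "Great news, very glad!" (made-up example)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

# The index-to-sentiment mapping is stored in the checkpoint's config.
print(probs, model.config.id2label)
```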
|
socialmediaie/TRAC2020_ENG_A_bert-base-uncased | 22820fee8a5882cc8f372a04a0baf69747ac580b | 2021-05-20T06:55:44.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_ENG_A_bert-base-uncased | 6 | null | transformers | 15,403 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models were retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd
TASK_LABEL_IDS = {
    "Sub-task A": ["OAG", "NAG", "CAG"],
    "Sub-task B": ["GEN", "NGEN"],
    "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}
model_version = "databank"  # other option is hugging face library
if model_version == "databank":
    # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
    # Unzip the file at some model_path (we are using: "databank_model")
    model_path = next(Path("databank_model").glob("./*/output/*/model"))
    # Assuming you get the following type of structure inside "databank_model"
    # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
    lang, task, _, base_model, _ = model_path.parts
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
    lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
    base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}"
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(base_model)
# For doing inference set model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()
task_labels = TASK_LABEL_IDS[task]
sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(processed_sentence)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])
with torch.no_grad():
    logits, = model(tokens_tensor, labels=None)
preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
'CAG-NGEN': 0.03244293,
'NAG-GEN': 0.6897794,
'NAG-NGEN': 0.15498641,
'OAG-GEN': 0.034373745,
'OAG-NGEN': 0.020792078},
array(['NAG-GEN'], dtype='<U8'))
"""
``` |
socialmediaie/TRAC2020_HIN_A_bert-base-multilingual-uncased | 5859d2a1675d792e1627ea69154482851994a4a4 | 2021-05-20T06:58:51.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_HIN_A_bert-base-multilingual-uncased | 6 | null | transformers | 15,404 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models were retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd
TASK_LABEL_IDS = {
    "Sub-task A": ["OAG", "NAG", "CAG"],
    "Sub-task B": ["GEN", "NGEN"],
    "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}
model_version = "databank"  # other option is hugging face library
if model_version == "databank":
    # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
    # Unzip the file at some model_path (we are using: "databank_model")
    model_path = next(Path("databank_model").glob("./*/output/*/model"))
    # Assuming you get the following type of structure inside "databank_model"
    # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
    lang, task, _, base_model, _ = model_path.parts
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
    lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
    base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}"
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(base_model)
# For doing inference set model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()
task_labels = TASK_LABEL_IDS[task]
sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(processed_sentence)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])
with torch.no_grad():
    logits, = model(tokens_tensor, labels=None)
preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
'CAG-NGEN': 0.03244293,
'NAG-GEN': 0.6897794,
'NAG-NGEN': 0.15498641,
'OAG-GEN': 0.034373745,
'OAG-NGEN': 0.020792078},
array(['NAG-GEN'], dtype='<U8'))
"""
``` |
sontn122/xlm-roberta-large-finetuned-squad-v2_15102021 | a65fb96a78b4729c84b66578360161a68618264b | 2021-10-15T02:19:34.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| question-answering | false | sontn122 | null | sontn122/xlm-roberta-large-finetuned-squad-v2_15102021 | 6 | null | transformers | 15,405 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: xlm-roberta-large-finetuned-squad-v2_15102021
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-squad-v2_15102021
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 17.5548
- eval_runtime: 168.7788
- eval_samples_per_second: 23.368
- eval_steps_per_second: 5.842
- epoch: 8.0
- step: 7600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
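As a rough reconstruction, these settings correspond to a `transformers.TrainingArguments` configuration along the lines of the sketch below. This is an illustration only, not the exact script used for this run, and the `output_dir` name is assumed from the model id:

```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above onto the Trainer API.
training_args = TrainingArguments(
    output_dir="xlm-roberta-large-finetuned-squad-v2_15102021",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=8,   # 4 x 8 = total train batch size 32
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```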
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.1
- Tokenizers 0.10.3
|
sshleifer/mar_enro_6_3_student | feedbcae51ccc586fffc8ba9d18a00e089e14a7d | 2020-11-04T14:45:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sshleifer | null | sshleifer/mar_enro_6_3_student | 6 | null | transformers | 15,406 | Entry not found |
superspray/distilbert_base_squad2_custom_dataset | 030f1d6a7b72c789af431dba3866aed0e15e256c | 2021-02-20T07:33:31.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | superspray | null | superspray/distilbert_base_squad2_custom_dataset | 6 | null | transformers | 15,407 | # Question & Answering Model for 'Save Your Minutes' from Dobby-AI
Distilbert_Base fine-tuned on SQuAD2.0 and custom QA dataset
This model is [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) trained on an additional custom dataset as follows:
```
!python3 run_squad.py --model_type distilbert \
--model_name_or_path /content/distilbert_base_384 \
--do_lower_case \
--output_dir /content/model/\
--do_train \
--train_file $data_dir/additional_qa.json\
--version_2_with_negative \
--do_lower_case \
--num_train_epochs 3 \
--weight_decay 0.01 \
--learning_rate 3e-5 \
--max_grad_norm 0.5 \
--adam_epsilon 1e-6 \
--max_seq_length 512 \
--doc_stride 128 \
--threads 12 \
--logging_steps 50 \
--save_steps 1000 \
--overwrite_output_dir \
--per_gpu_train_batch_size 4
```
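For inference with the resulting checkpoint, a minimal sketch using the standard `transformers` question-answering pipeline (the question and context strings are invented examples, not taken from the original card):

```python
from transformers import pipeline

# Load the fine-tuned extractive QA model from the Hub.
qa = pipeline("question-answering", model="superspray/distilbert_base_squad2_custom_dataset")

result = qa(
    question="What was discussed in the meeting?",
    context="The weekly sync covered the Q3 roadmap and the hiring plan for the data team.",
)
print(result["answer"], result["score"])
```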
We used Google Colab to train the model. |
suzuki/distilbert-base-uncased-finetuned-squad | 2f87b707f7457da1facfb245cb20d5b1fdc978e9 | 2021-10-18T12:41:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | suzuki | null | suzuki/distilbert-base-uncased-finetuned-squad | 6 | null | transformers | 15,408 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3817 | 1.0 | 2767 | 1.2962 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
swcrazyfan/TEFL-blogging-9K | 3515e666ee507fe7f4f2b6083df65dacc258b587 | 2021-06-03T01:32:49.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | swcrazyfan | null | swcrazyfan/TEFL-blogging-9K | 6 | null | transformers | 15,409 | Entry not found |
tals/albert-base-vitaminc_rationale | 7913c29e658d6100d17b284136662761241fb650 | 2022-06-22T23:57:03.000Z | [
"pytorch",
"albert",
"python",
"dataset:fever",
"dataset:glue",
"dataset:tals/vitaminc",
"transformers"
]
| null | false | tals | null | tals/albert-base-vitaminc_rationale | 6 | null | transformers | 15,410 | ---
language: python
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|
tanmoyio/test-model | ecc7c8bb5e43856baf047d9c9842233e75a3ea40 | 2022-01-25T15:08:18.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | tanmoyio | null | tanmoyio/test-model | 6 | null | transformers | 15,411 | Entry not found |
tartuNLP/EstBERT_UPOS_128 | 41619fa14f77cee6dbd1eeb4caaf5effc86c0df9 | 2022-05-03T07:49:00.000Z | [
"pytorch",
"bert",
"token-classification",
"et",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
]
| token-classification | false | tartuNLP | null | tartuNLP/EstBERT_UPOS_128 | 6 | null | transformers | 15,412 | ---
language: et
license: cc-by-4.0
--- |
tartuNLP/EstBERT_XPOS_128 | 84b59b11658c3d5d74a0097a26985364c98834b0 | 2022-05-03T07:48:25.000Z | [
"pytorch",
"bert",
"token-classification",
"et",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
]
| token-classification | false | tartuNLP | null | tartuNLP/EstBERT_XPOS_128 | 6 | null | transformers | 15,413 | ---
language: et
license: cc-by-4.0
--- |
tasosk/bert-base-uncased-airlines | a509ec07cbe9a8b5a1687e28bfc4f4f865157276 | 2021-12-18T20:20:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | tasosk | null | tasosk/bert-base-uncased-airlines | 6 | null | transformers | 15,414 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-airlines
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-airlines
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3458
- Accuracy: 0.9021
- F1: 0.9022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 405 | 0.3230 | 0.8754 | 0.8750 |
| 0.4658 | 2.0 | 810 | 0.2738 | 0.8986 | 0.8985 |
| 0.2473 | 3.0 | 1215 | 0.2944 | 0.9110 | 0.9111 |
| 0.2498 | 4.0 | 1620 | 0.3322 | 0.8950 | 0.8949 |
| 0.2174 | 5.0 | 2025 | 0.3342 | 0.9021 | 0.9021 |
| 0.2174 | 6.0 | 2430 | 0.3526 | 0.8986 | 0.8985 |
| 0.2055 | 7.0 | 2835 | 0.3458 | 0.9021 | 0.9022 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
textattack/xlnet-base-cased-STS-B | 0d4702ffb57ef25b02e5aad01cfae7c041e5ec12 | 2020-07-06T16:33:08.000Z | [
"pytorch",
"xlnet",
"text-generation",
"transformers"
]
| text-generation | false | textattack | null | textattack/xlnet-base-cased-STS-B | 6 | null | transformers | 15,415 | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 8, a learning
rate of 5e-05, and a maximum sequence length of 128.
Since this was a regression task, the model was trained with a mean squared error loss function.
The best score the model achieved on this task was 0.8892630070017784, as measured by the
eval set pearson correlation, found after 4 epochs.
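As an illustration, the checkpoint can be used to score a sentence pair through the standard `transformers` sequence-classification API. This is a minimal sketch assuming the usual single-output regression head used for STS-B, not TextAttack's own tooling, and the sentence pair is a made-up example:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# STS-B is a regression task: the model emits one similarity score per sentence pair.
tokenizer = AutoTokenizer.from_pretrained("textattack/xlnet-base-cased-STS-B")
model = AutoModelForSequenceClassification.from_pretrained("textattack/xlnet-base-cased-STS-B")
model.eval()

inputs = tokenizer("A man is playing a guitar.",
                   "A person is playing an instrument.",
                   return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.item()  # roughly on the 0-5 STS-B similarity scale
print(score)
```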
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
thomasdehaene/gpt2-large-dutch-finetune-oscar-10m-3epoch | 2b248e5bd4a1ccf3892df606296c2db77c7f1afd | 2021-05-23T13:08:54.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | thomasdehaene | null | thomasdehaene/gpt2-large-dutch-finetune-oscar-10m-3epoch | 6 | null | transformers | 15,416 | Entry not found |
tiennvcs/bert-base-uncased-finetuned-docvqa | 50c10b2bb427fdaa619ae899e8123e63b1277d6e | 2021-10-22T15:49:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | tiennvcs | null | tiennvcs/bert-base-uncased-finetuned-docvqa | 6 | null | transformers | 15,417 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-docvqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-docvqa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2151 | 0.1 | 1000 | 2.6299 |
| 1.8885 | 0.21 | 2000 | 2.2217 |
| 1.7353 | 0.31 | 3000 | 2.1675 |
| 1.6188 | 0.41 | 4000 | 2.2436 |
| 1.5802 | 0.52 | 5000 | 2.0539 |
| 1.4875 | 0.62 | 6000 | 2.0551 |
| 1.4675 | 0.73 | 7000 | 1.9368 |
| 1.3485 | 0.83 | 8000 | 1.9456 |
| 1.3273 | 0.93 | 9000 | 1.9281 |
| 1.1048 | 1.04 | 10000 | 1.9333 |
| 0.9529 | 1.14 | 11000 | 2.2019 |
| 0.9418 | 1.24 | 12000 | 2.0381 |
| 0.9209 | 1.35 | 13000 | 1.8753 |
| 0.8788 | 1.45 | 14000 | 1.9964 |
| 0.8729 | 1.56 | 15000 | 1.9690 |
| 0.8671 | 1.66 | 16000 | 1.8513 |
| 0.8379 | 1.76 | 17000 | 1.9627 |
| 0.8722 | 1.87 | 18000 | 1.8988 |
| 0.7842 | 1.97 | 19000 | 1.9146 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
typeform/distilroberta-base | 246888851328b937eb2d9c955fd2f74fcf0c4e44 | 2021-01-20T14:23:46.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:openwebtext",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | typeform | null | typeform/distilroberta-base | 6 | null | transformers | 15,418 | ---
language: en
license: apache-2.0
datasets:
- openwebtext
---
# DistilRoBERTa base model
Forked from https://huggingface.co/distilroberta-base
|
uclanlp/plbart-go-en_XX | 5fad2bbc01dd24746f7941a12980bc57bd8db25f | 2021-11-09T17:08:27.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | uclanlp | null | uclanlp/plbart-go-en_XX | 6 | null | transformers | 15,419 | Entry not found |
uclanlp/plbart-php-en_XX | 019e9888cf88e657798f9bb9a95dfaefd1b47563 | 2021-11-09T17:09:15.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | uclanlp | null | uclanlp/plbart-php-en_XX | 6 | null | transformers | 15,420 | Entry not found |
uer/bert-3.9B-chinese-cluecorpussmall | c8f0a2dd76c64a43c9d0c82d966198f4c4d70876 | 2021-12-13T10:50:27.000Z | [
"pytorch",
"megatron-bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | uer | null | uer/bert-3.9B-chinese-cluecorpussmall | 6 | null | transformers | 15,421 | Entry not found |
uer/chinese_roberta_L-10_H-128 | 226739f93bdeee2998a2c2c39add37fb51c5a381 | 2022-07-15T08:14:39.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | uer | null | uer/chinese_roberta_L-10_H-128 | 6 | 1 | transformers | 15,422 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with a sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium as an example:
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
ufal/byt5-small-multilexnorm2021-de | 73da8079205ee703b8df9253ead750e8cf8f20ce | 2021-10-20T12:10:26.000Z | [
"pytorch",
"t5",
"text2text-generation",
"de",
"dataset:mc4",
"dataset:wikipedia",
"dataset:multilexnorm",
"arxiv:2105.13626",
"arxiv:1907.06292",
"transformers",
"lexical normalization",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | ufal | null | ufal/byt5-small-multilexnorm2021-de | 6 | null | transformers | 15,423 | ---
language: de
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---
# Fine-tuned ByT5-small for MultiLexNorm (German version)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
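For orientation only, loading the checkpoint follows the usual `transformers` seq2seq API. The sketch below shows just loading and a bare `generate` call; the actual token-to-token input formatting (how the target token and its context are encoded) is defined in the Colab demo and the GitHub repository, so treat this as an assumption-laden illustration rather than the reference usage:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# ByT5 operates directly on UTF-8 bytes, so no vocabulary file is involved.
tokenizer = AutoTokenizer.from_pretrained("ufal/byt5-small-multilexnorm2021-de")
model = T5ForConditionalGeneration.from_pretrained("ufal/byt5-small-multilexnorm2021-de")

# Placeholder input: the real MultiLexNorm format wraps the token to normalize in its
# sentence context (see the Colab demo); this only checks that the model loads and generates.
inputs = tokenizer("hallo wie gehts", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```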
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
|
valeriazen/ruT5-base-finetuned-plenka-chatbot-full | ca4a10911ba1c3e3c5e9c5eadfc3c5dbd7fcf5e7 | 2022-01-19T08:54:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | valeriazen | null | valeriazen/ruT5-base-finetuned-plenka-chatbot-full | 6 | null | transformers | 15,424 | Entry not found |
vesteinn/XLMr-ENIS-QA-Is | 0eadc67de13e6f73d41577b55dfd281270e9c78d | 2022-02-17T22:07:24.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"is",
"dataset:ic3",
"dataset:igc",
"transformers",
"icelandic",
"qa",
"autotrain_compatible"
]
| question-answering | false | vesteinn | null | vesteinn/XLMr-ENIS-QA-Is | 6 | null | transformers | 15,425 | ---
language:
- is
thumbnail:
tags:
- icelandic
- qa
datasets:
- ic3
- igc
metrics:
- em
- f1
widget:
- text: "Hvenær var Halldór Laxness í menntaskóla ?"
context: "Halldór Laxness ( Halldór Kiljan ) fæddist í Reykjavík 23. apríl árið 1902 og átti í fyrstu heima við Laugaveg en árið 1905 settist fjölskyldan að í Laxnesi í Mosfellssveit . Þar ólst Halldór upp en sótti skóla í Reykjavík á unglingsárum . Ungur hélt hann síðan utan og var langdvölum erlendis um árabil – í ýmsum Evrópulöndum og síðar í Ameríku . Þegar hann var heima bjó hann í Reykjavík þar til hann og kona hans , Auður Sveinsdóttir , byggðu sér húsið Gljúfrastein í Mosfellssveit og fluttu þangað árið 1945 . Þar var heimili þeirra alla tíð síðan og þar er nú safn til minningar um þau . Halldór lést 8. febrúar 1998 . Skólaganga Halldórs varð ekki löng . Árið 1918 hóf hann nám við Menntaskólann í Reykjavík en hafði lítinn tíma til að læra , enda var hann að skrifa skáldsögu , Barn náttúrunnar , sem kom út haustið 1919 – þá þegar var höfundurinn ungi farinn af landi brott . Sagan vakti þó nokkra athygli og í Alþýðublaðinu sagði m.a. : „ Og hver veit nema að Halldór frá Laxnesi eigi eftir að verða óskabarn íslensku þjóðarinnar . “ Upp frá þessu sendi Halldór frá sér bók nánast á hverju ári , stundum fleiri en eina , í yfir sex áratugi . Afköst hans voru með eindæmum ; hann skrifaði fjölda skáldsagna , sumar í nokkrum hlutum , leikrit , kvæði , smásagnasöfn og endurminningabækur og gaf auk þess út mörg greinasöfn og ritgerðir . Bækurnar eru fjölbreyttar en eiga það sameiginlegt að vera skrifaðar af einstakri stílgáfu , djúpum mannskilningi og víðtækri þekkingu á sögu og samfélagi . Þar birtast oft afgerandi skoðanir á þjóðfélagsmálum og sögupersónur eru margar einkar eftirminnilegar ; tilsvör þeirra og lunderni hafa orðið samofin þjóðarsálinni . Þekktustu verk Halldórs eru eflaust skáldsögurnar stóru og rismiklu , s.s. Salka Valka , Sjálfstætt fólk , Heimsljós , Íslandsklukkan og Gerpla , og raunar mætti telja upp mun fleiri ; Kvæðabók hans er í uppáhaldi hjá mörgum sem og minningabækurnar sem hann skrifaði á efri árum um æskuár sín ; af þekktum greinasöfnum og ritgerðum má nefna Alþýðubókina og Skáldatíma . Mikið hefur verið skrifað um verk og ævi skáldsins , en hér skal aðeins bent á ítarlega frásögn og greiningu Halldórs Guðmundssonar í bókinni Halldór Laxness – ævisaga ."
---
# XLMr-ENIS-QA-Is
## Model description
This is an Icelandic reading comprehension Q&A model.
## Intended uses & limitations
This model is part of my MSc thesis about Q&A for Icelandic.
#### How to use
```python
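# Minimal usage sketch (not the author's reference code): extractive QA with the
# standard transformers pipeline; the question and context are taken from the widget example above.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="vesteinn/XLMr-ENIS-QA-Is",
    tokenizer="vesteinn/XLMr-ENIS-QA-Is",
)

result = qa(
    question="Hvenær var Halldór Laxness í menntaskóla?",
    context="Árið 1918 hóf hann nám við Menntaskólann í Reykjavík en hafði lítinn tíma til að læra.",
)
print(result["answer"], result["score"])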
```
#### Limitations and bias
## Training data
Translated English datasets were used along with the Natural Questions in Icelandic dataset.
## Training procedure
## Eval results
### BibTeX entry and citation info
```bibtex
```
|
vidhur2k/mBERT-German-Mono | a90080b2c90c631e2fd6e5212fbba343779a52a6 | 2021-12-02T20:16:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | vidhur2k | null | vidhur2k/mBERT-German-Mono | 6 | null | transformers | 15,426 | Entry not found |
vitouphy/wav2vec2-xls-r-300m-japanese | 6ca1b5ac146d9553b6ab128c56af46623f5d6fbe | 2022-03-23T18:30:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | vitouphy | null | vitouphy/wav2vec2-xls-r-300m-japanese | 6 | null | transformers | 15,427 | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- ja
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Japanese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ja
metrics:
- name: Test WER
type: wer
value: 54.05
- name: Test CER
type: cer
value: 27.54
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ja
metrics:
- name: Validation WER
type: wer
value: 48.77
- name: Validation CER
type: cer
value: 24.87
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ja
metrics:
- name: Test CER
type: cer
value: 27.36
---
# XLS-R-300M - Japanese
This model transcribes audio into hiragana, one of the writing systems of the Japanese language.
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the `mozilla-foundation/common_voice_8_0` dataset. Note that the following results are achieved by:
- Modify `eval.py` to suit the use case.
- Since kanji and katakana share the same sounds as hiragana, we convert all texts to hiragana using [pykakasi](https://pykakasi.readthedocs.io) and tokenize them using [fugashi](https://github.com/polm/fugashi) (a small sketch of this conversion follows the list).
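As an illustration of that preprocessing step, here is a minimal sketch assuming the current pykakasi 2.x and fugashi APIs; the exact code lives in the modified `eval.py`, so treat this as an approximation:

```python
import pykakasi
from fugashi import Tagger

kks = pykakasi.kakasi()
tagger = Tagger()

def to_hiragana_words(text: str) -> str:
    # Convert kanji/katakana to hiragana, then split into words with fugashi/MeCab.
    hiragana = "".join(item["hira"] for item in kks.convert(text))
    return " ".join(word.surface for word in tagger(hiragana))

print(to_hiragana_words("日本語を話します"))  # roughly: "にほんご を はなし ます"
```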
It achieves the following results on the evaluation set:
- Loss: 0.7751
- Cer: 0.2227
# Evaluation results (Running ./eval.py):
| Model | Metric | Common-Voice-8/test | speech-recognition-community-v2/dev-data |
|:--------:|:------:|:-------------------:|:------------------------------------------:|
| w/o LM | WER | 0.5964 | 0.5532 |
| | CER | 0.2944 | 0.2629 |
| w/ LM | WER | 0.5405 | 0.4877 |
| | CER | **0.2754** | **0.2487** |
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.4081 | 1.6 | 500 | 4.0983 | 1.0 |
| 3.303 | 3.19 | 1000 | 3.3563 | 1.0 |
| 3.1538 | 4.79 | 1500 | 3.2066 | 0.9239 |
| 2.1526 | 6.39 | 2000 | 1.1597 | 0.3355 |
| 1.8726 | 7.98 | 2500 | 0.9023 | 0.2505 |
| 1.7817 | 9.58 | 3000 | 0.8219 | 0.2334 |
| 1.7488 | 11.18 | 3500 | 0.7915 | 0.2222 |
| 1.7039 | 12.78 | 4000 | 0.7751 | 0.2227 |
| Stop & Train | | | | |
| 1.6571 | 15.97 | 5000 | 0.6788 | 0.1685 |
| 1.520400 | 19.16 | 6000 | 0.6095 | 0.1409 |
| 1.448200 | 22.35 | 7000 | 0.5843 | 0.1430 |
| 1.385400 | 25.54 | 8000 | 0.5699 | 0.1263 |
| 1.354200 | 28.73 | 9000 | 0.5686 | 0.1219 |
| 1.331500 | 31.92 | 10000 | 0.5502 | 0.1144 |
| 1.290800 | 35.11 | 11000 | 0.5371 | 0.1140 |
| Stop & Train | | | | |
| 1.235200 | 38.30 | 12000 | 0.5394 | 0.1106 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
vitvit/xlm-roberta-base-finetuned-heb_HebrewSentiment | 3fc65317a1f702bb739288092a0cff057e2bac8e | 2021-09-19T06:52:56.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| token-classification | false | vitvit | null | vitvit/xlm-roberta-base-finetuned-heb_HebrewSentiment | 6 | null | transformers | 15,428 | ---
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: xlm-roberta-base-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: he
metric:
name: Accuracy
type: accuracy
value: 0.9449884563330945
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5647
- Precision: 0.8684
- Recall: 0.8656
- F1: 0.8670
- Accuracy: 0.9450
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3537 | 1.0 | 6667 | 0.3621 | 0.7951 | 0.8054 | 0.8002 | 0.9187 |
| 0.2468 | 2.0 | 13334 | 0.3024 | 0.8341 | 0.8451 | 0.8396 | 0.9359 |
| 0.1705 | 3.0 | 20001 | 0.3255 | 0.8401 | 0.8328 | 0.8364 | 0.9365 |
| 0.1388 | 4.0 | 26668 | 0.3530 | 0.8438 | 0.8527 | 0.8482 | 0.9389 |
| 0.0979 | 5.0 | 33335 | 0.3980 | 0.8445 | 0.8542 | 0.8494 | 0.9390 |
| 0.0946 | 6.0 | 40002 | 0.3863 | 0.8500 | 0.8622 | 0.8560 | 0.9426 |
| 0.0908 | 7.0 | 46669 | 0.3991 | 0.8519 | 0.8633 | 0.8576 | 0.9420 |
| 0.0712 | 8.0 | 53336 | 0.4065 | 0.8617 | 0.8551 | 0.8584 | 0.9424 |
| 0.0568 | 9.0 | 60003 | 0.4348 | 0.8441 | 0.8663 | 0.8551 | 0.9413 |
| 0.0448 | 10.0 | 66670 | 0.4661 | 0.8429 | 0.8603 | 0.8515 | 0.9404 |
| 0.0687 | 11.0 | 73337 | 0.4482 | 0.8561 | 0.8621 | 0.8591 | 0.9431 |
| 0.0552 | 12.0 | 80004 | 0.4527 | 0.8499 | 0.8619 | 0.8558 | 0.9405 |
| 0.059 | 13.0 | 86671 | 0.4688 | 0.8564 | 0.8592 | 0.8578 | 0.9428 |
| 0.0362 | 14.0 | 93338 | 0.4593 | 0.8705 | 0.8615 | 0.8660 | 0.9451 |
| 0.0407 | 15.0 | 100005 | 0.4661 | 0.8647 | 0.8674 | 0.8660 | 0.9449 |
| 0.0278 | 16.0 | 106672 | 0.4794 | 0.8670 | 0.8707 | 0.8688 | 0.9457 |
| 0.0425 | 17.0 | 113339 | 0.5056 | 0.8548 | 0.8698 | 0.8622 | 0.9440 |
| 0.0251 | 18.0 | 120006 | 0.4630 | 0.8658 | 0.8603 | 0.8630 | 0.9442 |
| 0.0207 | 19.0 | 126673 | 0.5077 | 0.8515 | 0.8574 | 0.8544 | 0.9420 |
| 0.0245 | 20.0 | 133340 | 0.5130 | 0.8630 | 0.8646 | 0.8638 | 0.9437 |
| 0.051 | 21.0 | 140007 | 0.5233 | 0.8578 | 0.8644 | 0.8611 | 0.9423 |
| 0.0381 | 22.0 | 146674 | 0.5269 | 0.8688 | 0.8635 | 0.8661 | 0.9433 |
| 0.0144 | 23.0 | 153341 | 0.5137 | 0.8572 | 0.8668 | 0.8620 | 0.9443 |
| 0.0237 | 24.0 | 160008 | 0.5121 | 0.8741 | 0.8552 | 0.8645 | 0.9443 |
| 0.0175 | 25.0 | 166675 | 0.5019 | 0.8665 | 0.8725 | 0.8695 | 0.9467 |
| 0.0268 | 26.0 | 173342 | 0.5247 | 0.8597 | 0.8696 | 0.8646 | 0.9433 |
| 0.0128 | 27.0 | 180009 | 0.5075 | 0.8696 | 0.8704 | 0.8700 | 0.9461 |
| 0.0299 | 28.0 | 186676 | 0.5066 | 0.8647 | 0.8636 | 0.8641 | 0.9444 |
| 0.018 | 29.0 | 193343 | 0.5421 | 0.8677 | 0.8609 | 0.8643 | 0.9432 |
| 0.0264 | 30.0 | 200010 | 0.5023 | 0.8479 | 0.8731 | 0.8603 | 0.9424 |
| 0.0169 | 31.0 | 206677 | 0.5215 | 0.8672 | 0.8653 | 0.8662 | 0.9435 |
| 0.0185 | 32.0 | 213344 | 0.5184 | 0.8698 | 0.8630 | 0.8664 | 0.9457 |
| 0.0159 | 33.0 | 220011 | 0.4930 | 0.8653 | 0.8662 | 0.8657 | 0.9448 |
| 0.026 | 34.0 | 226678 | 0.4976 | 0.8579 | 0.8794 | 0.8685 | 0.9456 |
| 0.016 | 35.0 | 233345 | 0.5671 | 0.8517 | 0.8689 | 0.8602 | 0.9421 |
| 0.0186 | 36.0 | 240012 | 0.4881 | 0.8706 | 0.8752 | 0.8729 | 0.9467 |
| 0.0253 | 37.0 | 246679 | 0.5351 | 0.8621 | 0.8725 | 0.8673 | 0.9447 |
| 0.0086 | 38.0 | 253346 | 0.5759 | 0.8742 | 0.8612 | 0.8677 | 0.9440 |
| 0.0157 | 39.0 | 260013 | 0.5362 | 0.8549 | 0.8696 | 0.8622 | 0.9436 |
| 0.0107 | 40.0 | 266680 | 0.5734 | 0.8730 | 0.8582 | 0.8655 | 0.9438 |
| 0.0139 | 41.0 | 273347 | 0.4995 | 0.8622 | 0.8729 | 0.8675 | 0.9457 |
| 0.0141 | 42.0 | 280014 | 0.5567 | 0.8651 | 0.8671 | 0.8661 | 0.9448 |
| 0.0146 | 43.0 | 286681 | 0.5124 | 0.8673 | 0.8691 | 0.8682 | 0.9460 |
| 0.0125 | 44.0 | 293348 | 0.5511 | 0.8568 | 0.8758 | 0.8662 | 0.9440 |
| 0.0153 | 45.0 | 300015 | 0.5385 | 0.8597 | 0.8720 | 0.8658 | 0.9445 |
| 0.017 | 46.0 | 306682 | 0.5302 | 0.8633 | 0.8714 | 0.8673 | 0.9448 |
| 0.0121 | 47.0 | 313349 | 0.5302 | 0.8604 | 0.8666 | 0.8635 | 0.9441 |
| 0.0136 | 48.0 | 320016 | 0.5639 | 0.8481 | 0.8677 | 0.8578 | 0.9404 |
| 0.0107 | 49.0 | 326683 | 0.5403 | 0.8731 | 0.8648 | 0.8689 | 0.9457 |
| 0.0083 | 50.0 | 333350 | 0.5615 | 0.8770 | 0.8581 | 0.8675 | 0.9431 |
| 0.0121 | 51.0 | 340017 | 0.5489 | 0.8512 | 0.8730 | 0.8620 | 0.9439 |
| 0.0079 | 52.0 | 346684 | 0.5328 | 0.8599 | 0.8736 | 0.8667 | 0.9458 |
| 0.0139 | 53.0 | 353351 | 0.5572 | 0.8665 | 0.8631 | 0.8648 | 0.9441 |
| 0.0138 | 54.0 | 360018 | 0.5128 | 0.8662 | 0.8740 | 0.8701 | 0.9468 |
| 0.014 | 55.0 | 366685 | 0.5603 | 0.8798 | 0.8662 | 0.8730 | 0.9460 |
| 0.0319 | 56.0 | 373352 | 0.5508 | 0.8631 | 0.8688 | 0.8659 | 0.9427 |
| 0.0152 | 57.0 | 380019 | 0.5716 | 0.8596 | 0.8644 | 0.8620 | 0.9429 |
| 0.0249 | 58.0 | 386686 | 0.5692 | 0.8595 | 0.8749 | 0.8671 | 0.9453 |
| 0.0161 | 59.0 | 393353 | 0.5483 | 0.8665 | 0.8715 | 0.8690 | 0.9463 |
| 0.0157 | 60.0 | 400020 | 0.5588 | 0.8603 | 0.8800 | 0.8701 | 0.9463 |
| 0.0247 | 61.0 | 406687 | 0.5265 | 0.8510 | 0.8662 | 0.8585 | 0.9417 |
| 0.0069 | 62.0 | 413354 | 0.5578 | 0.8681 | 0.8679 | 0.8680 | 0.9459 |
| 0.0254 | 63.0 | 420021 | 0.5756 | 0.8620 | 0.8646 | 0.8633 | 0.9435 |
| 0.0182 | 64.0 | 426688 | 0.5323 | 0.8651 | 0.8762 | 0.8707 | 0.9458 |
| 0.0237 | 65.0 | 433355 | 0.5342 | 0.8592 | 0.8724 | 0.8657 | 0.9443 |
| 0.0234 | 66.0 | 440022 | 0.5458 | 0.8653 | 0.8679 | 0.8666 | 0.9437 |
| 0.0159 | 67.0 | 446689 | 0.5166 | 0.8781 | 0.8624 | 0.8702 | 0.9448 |
| 0.0204 | 68.0 | 453356 | 0.5499 | 0.8658 | 0.8723 | 0.8690 | 0.9452 |
| 0.0117 | 69.0 | 460023 | 0.5573 | 0.8572 | 0.8714 | 0.8642 | 0.9432 |
| 0.0062 | 70.0 | 466690 | 0.5887 | 0.8592 | 0.8675 | 0.8633 | 0.9422 |
| 0.0123 | 71.0 | 473357 | 0.5138 | 0.8600 | 0.8699 | 0.8649 | 0.9448 |
| 0.0079 | 72.0 | 480024 | 0.5548 | 0.8610 | 0.8724 | 0.8666 | 0.9447 |
| 0.0061 | 73.0 | 486691 | 0.5872 | 0.8476 | 0.8675 | 0.8574 | 0.9415 |
| 0.0129 | 74.0 | 493358 | 0.5520 | 0.8727 | 0.8595 | 0.8661 | 0.9449 |
| 0.0159 | 75.0 | 500025 | 0.5427 | 0.8611 | 0.8674 | 0.8642 | 0.9435 |
| 0.0258 | 76.0 | 506692 | 0.5402 | 0.8672 | 0.8702 | 0.8687 | 0.9448 |
| 0.0151 | 77.0 | 513359 | 0.5589 | 0.8681 | 0.8704 | 0.8693 | 0.9457 |
| 0.0075 | 78.0 | 520026 | 0.5754 | 0.8613 | 0.8682 | 0.8647 | 0.9438 |
| 0.0076 | 79.0 | 526693 | 0.5709 | 0.8608 | 0.8646 | 0.8627 | 0.9445 |
| 0.0196 | 80.0 | 533360 | 0.5252 | 0.8714 | 0.8706 | 0.8710 | 0.9461 |
| 0.0123 | 81.0 | 540027 | 0.5857 | 0.8637 | 0.8631 | 0.8634 | 0.9437 |
| 0.0205 | 82.0 | 546694 | 0.5805 | 0.8642 | 0.8655 | 0.8648 | 0.9431 |
| 0.0065 | 83.0 | 553361 | 0.5815 | 0.8619 | 0.8626 | 0.8622 | 0.9431 |
| 0.0128 | 84.0 | 560028 | 0.6305 | 0.8498 | 0.8646 | 0.8571 | 0.9402 |
| 0.0118 | 85.0 | 566695 | 0.5620 | 0.8648 | 0.8682 | 0.8665 | 0.9445 |
| 0.0173 | 86.0 | 573362 | 0.5714 | 0.8655 | 0.8657 | 0.8656 | 0.9442 |
| 0.0107 | 87.0 | 580029 | 0.5845 | 0.8603 | 0.8649 | 0.8626 | 0.9418 |
| 0.0218 | 88.0 | 586696 | 0.5259 | 0.8708 | 0.8697 | 0.8703 | 0.9449 |
| 0.0039 | 89.0 | 593363 | 0.5809 | 0.8800 | 0.8648 | 0.8723 | 0.9465 |
| 0.0076 | 90.0 | 600030 | 0.5852 | 0.8744 | 0.8615 | 0.8679 | 0.9443 |
| 0.008 | 91.0 | 606697 | 0.5540 | 0.8689 | 0.8683 | 0.8686 | 0.9454 |
| 0.0114 | 92.0 | 613364 | 0.5836 | 0.8578 | 0.8639 | 0.8609 | 0.9422 |
| 0.0245 | 93.0 | 620031 | 0.5808 | 0.8735 | 0.8672 | 0.8703 | 0.9450 |
| 0.0142 | 94.0 | 626698 | 0.5846 | 0.8630 | 0.8692 | 0.8661 | 0.9429 |
| 0.0013 | 95.0 | 633365 | 0.5495 | 0.8656 | 0.8605 | 0.8630 | 0.9432 |
| 0.0093 | 96.0 | 640032 | 0.6049 | 0.8660 | 0.8656 | 0.8658 | 0.9436 |
| 0.012 | 97.0 | 646699 | 0.5802 | 0.8633 | 0.8618 | 0.8626 | 0.9427 |
| 0.0042 | 98.0 | 653366 | 0.5851 | 0.8571 | 0.8658 | 0.8615 | 0.9422 |
| 0.0143 | 99.0 | 660033 | 0.5619 | 0.8671 | 0.8626 | 0.8649 | 0.9437 |
| 0.0173 | 100.0 | 666700 | 0.5647 | 0.8684 | 0.8656 | 0.8670 | 0.9450 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu111
- Datasets 1.11.0
- Tokenizers 0.10.3
|
vkk1710/xlnet-base-cased-finetuned-qqp | 0c2bd74668b9c9e1c9072c8a7cd36c2b53b6cd5e | 2021-11-15T19:25:06.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | vkk1710 | null | vkk1710/xlnet-base-cased-finetuned-qqp | 6 | null | transformers | 15,429 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: xlnet-base-cased-finetuned-qqp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-finetuned-qqp
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the qqp dataset (part of glue dataset).
It achieves the following results on the evaluation set:
- eval_loss: 0.27
- eval_accuracy: 0.9084
- eval_f1: 0.8775
- epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
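A minimal sketch of how the hyperparameters listed above could be expressed with the `transformers` Trainer API; the output directory name is an assumption, and any option not listed above is left at its default:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlnet-base-cased-finetuned-qqp",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    weight_decay=0.01,
    num_train_epochs=3,
    lr_scheduler_type="linear",
)
```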
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
voidful/unifiedqg-bart-base | 84f3707b6a137c8a1bbfaba06001ba608307f1cc | 2021-12-09T12:32:45.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:unifiedQA",
"transformers",
"question",
"generation",
"seq2seq",
"autotrain_compatible"
]
| text2text-generation | false | voidful | null | voidful/unifiedqg-bart-base | 6 | null | transformers | 15,430 | ---
language: en
tags:
- bart
- question
- generation
- seq2seq
datasets:
- unifiedQA
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
widget:
- text: "treehouses in france. \n When you ' re having a holiday , one of the main questions to ask is which hotel or apartment to choose . However , when it comes to France , you have another special choice : treehouses . In France , treehouses are offered to travelers as a new choice in many places . The price may be a little higher , but you do have a chance to _ your childhood memories . Alain Laurens , one of France ' s top treehouse designers , said , ' Most of the people might have the experience of building a den when they were young . And they like that feeling of freedom when they are children . ' Its fairy - tale style gives travelers a special feeling . It seems as if they are living as a forest king and enjoying the fresh air in the morning . Another kind of treehouse is the ' star cube ' . It gives travelers the chance of looking at the stars shining in the sky when they are going to sleep . Each ' star cube ' not only offers all the comfortable things that a hotel provides for travelers , but also gives them a chance to look for stars by using a telescope . The glass roof allows you to look at the stars from your bed ."
---
# unifiedqg-bart-base
## Model description
This model is a sequence-to-sequence question generator which takes an answer and context as an input, and generates a question as an output.
It is based on a pretrained `bart-base` model.
#### How to use
The model takes the concatenated answer and context as an input sequence and generates a full question as an output sequence. The max sequence length is 1024 tokens. Inputs should be organised into the following format:
```
answer \n context
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
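A minimal sketch of that workflow; the answer/context strings and the generation settings such as `num_beams` are illustrative assumptions, not part of the original card:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("voidful/unifiedqg-bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("voidful/unifiedqg-bart-base")

# Concatenate the answer and the context with "\n", as described above.
answer = "treehouses"
context = "In France, treehouses are offered to travelers as a new choice in many places."
input_text = f"{answer} \n {context}"

# Encode the input and generate a question.
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=1024)
outputs = model.generate(inputs["input_ids"], max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```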
|
vvn/en-to-dutch-marianmt | 87cec79915fb1713db7ac4fab21e2869eaa30503 | 2021-07-31T13:02:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | vvn | null | vvn/en-to-dutch-marianmt | 6 | null | transformers | 15,431 | Fine-Tuned MarianMT translation model for translating text from English to Dutch. Checkpoint of pre-trained model = Helsinki-NLP/opus-mt-en-nl.
Trained using custom training loop with PyTorch on Colab for 2 epochs. Link to the GitHub repo containing Google Colab notebook: https://github.com/vanadnarayane26/Maverick_2.0_Translation_layer/blob/main/Eng_to_dutch_marianmt.ipynb
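A minimal usage sketch with the standard MarianMT classes from `transformers`; the example sentence is illustrative:
```python
from transformers import MarianTokenizer, MarianMTModel

tokenizer = MarianTokenizer.from_pretrained("vvn/en-to-dutch-marianmt")
model = MarianMTModel.from_pretrained("vvn/en-to-dutch-marianmt")

# Translate a single English sentence to Dutch.
batch = tokenizer(["The weather is beautiful today."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```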
|
vxvxx/t5-small-finetuned-no_paragraph-to-paragraph | 8b59ae67f8a36c5488ce4541d36ed46becddb791 | 2022-02-15T23:01:34.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | vxvxx | null | vxvxx/t5-small-finetuned-no_paragraph-to-paragraph | 6 | null | transformers | 15,432 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-no_paragraph-to-paragraph
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-no_paragraph-to-paragraph
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0713
- Bleu: 0.0
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| 0.767 | 1.0 | 576 | 0.0713 | 0.0 | 19.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
wandemberg-eld/opus-mt-en-de-finetuned-en-to-de | 39c033631e888b33e9476d57c4b3ecaff527183d | 2021-12-01T12:49:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | wandemberg-eld | null | wandemberg-eld/opus-mt-en-de-finetuned-en-to-de | 6 | null | transformers | 15,433 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-de-finetuned-en-to-de
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 29.4312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-de-finetuned-en-to-de
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4083
- Bleu: 29.4312
- Gen Len: 24.746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 1.978 | 1.0 | 568611 | 1.4083 | 29.4312 | 24.746 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
wangyuwei/bert_cn_finetuning | cadd317823258172b82e7c46a942fb7bb79a9080 | 2021-05-20T09:05:36.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | wangyuwei | null | wangyuwei/bert_cn_finetuning | 6 | null | transformers | 15,434 | Entry not found |
whher/german-gpt2-romantik | 761c9aab14352853087a5a67540b7eb74f632cfa | 2021-08-25T19:21:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | whher | null | whher/german-gpt2-romantik | 6 | null | transformers | 15,435 | Model Description
------
The german-gpt2-romantik model was fine-tuned from [dbmdz's german gpt-2](https://huggingface.co/dbmdz/german-gpt2 "dbmdz's german-gpt2") to specialize in poetry generation tasks.
Training Data
------
The training data consists of hand-picked poems from the German Romantic era (German: *Romantik*). In total, the corpus contains 2,641 poems and 879,427 tokens.
Poem Generation
------
Enter a starting sentence or phrase (for example with the Inference API on the right) and the model will output poem-like text. You can try entering "Der Garten der Freude", which outputs:
"Der Garten der Freude,
in dem mein Auge ruht,
wo Gott und die Sonne,
hier im Himmel,
zu allen Zeiten uns umgeben."
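A minimal generation sketch using the `text-generation` pipeline; the sampling settings below are illustrative assumptions:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="whher/german-gpt2-romantik")

# Generate a poem-like continuation from a starting phrase.
result = generator(
    "Der Garten der Freude",
    max_length=60,   # assumed value, adjust to taste
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(result[0]["generated_text"])
```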
|
wietsedv/bert-base-dutch-cased-finetuned-udlassy-ner | bb98dd2895a842369a146c76363a8c7a388cb17b | 2021-05-20T09:10:49.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | wietsedv | null | wietsedv/bert-base-dutch-cased-finetuned-udlassy-ner | 6 | null | transformers | 15,436 | Entry not found |
wilsontam/gpt2-dstc9 | c0c1a2fa66a3f2d8c71f698b6c89185a4bc1d6c2 | 2021-12-26T14:02:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"dstc9"
]
| text-generation | false | wilsontam | null | wilsontam/gpt2-dstc9 | 6 | null | transformers | 15,437 | ---
language: "en"
tags:
- dstc9
widget:
- text: "Yes, I'm going to be in Chinatown, San Francisco and am looking"
- text: "Can you find me one that is in the"
---
This GPT-2 model was trained on DSTC9 data for dialogue modeling purposes.
Data link: https://github.com/alexa/alexa-with-dstc9-track1-dataset
Credit: Jia-Chen Jason Gu, Wilson Tam
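A minimal sketch of querying the model with the `text-generation` pipeline; the prompt matches one of the widget examples, and the sampling settings are illustrative assumptions:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="wilsontam/gpt2-dstc9")

prompt = "Yes, I'm going to be in Chinatown, San Francisco and am looking"
result = generator(prompt, max_length=40, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```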
|
xkang/bert-finetuned-ner-accelerate | ab8e03824261a5650152517960a3dc2ff75ff4f0 | 2021-12-21T07:50:19.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | xkang | null | xkang/bert-finetuned-ner-accelerate | 6 | null | transformers | 15,438 | Entry not found |
zhuqing/bert-base-uncased-netmums-feminist | 464ab510083a18692a2be006bdd6ae3e1d4c69b7 | 2021-08-13T09:20:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-netmums-feminist | 6 | null | transformers | 15,439 | Entry not found |
zhuqing/comparison-roberta-base-uncased-netmums-feminist | a5759560e32e857034f29471bd80a44027e897cd | 2021-08-20T05:17:51.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | zhuqing | null | zhuqing/comparison-roberta-base-uncased-netmums-feminist | 6 | null | transformers | 15,440 | Entry not found |
zhuqing/v1-theme2 | c6682189164ba4c4ca3bf4b15241117de94028c3 | 2021-07-07T16:02:21.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | zhuqing | null | zhuqing/v1-theme2 | 6 | null | transformers | 15,441 | Entry not found |
zoeymeng913/bert_cn_finetuning | 9e3a1d5d1540235e17fe4774f77a7e4bac29a99a | 2021-05-20T09:54:41.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | zoeymeng913 | null | zoeymeng913/bert_cn_finetuning | 6 | null | transformers | 15,442 | Entry not found |
wietsedv/xlm-roberta-base-ft-udpos28-nl | 406757ea8fd1b72a73fb3b6e804a61e350d0ffcb | 2022-02-25T09:59:07.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"nl",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-nl | 6 | null | transformers | 15,443 |
---
language:
- nl
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-nl
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 88.8
- type: accuracy
name: Dutch Test accuracy
value: 97.0
- type: accuracy
name: German Test accuracy
value: 89.0
- type: accuracy
name: Italian Test accuracy
value: 89.9
- type: accuracy
name: French Test accuracy
value: 88.1
- type: accuracy
name: Spanish Test accuracy
value: 90.5
- type: accuracy
name: Russian Test accuracy
value: 89.2
- type: accuracy
name: Swedish Test accuracy
value: 90.7
- type: accuracy
name: Norwegian Test accuracy
value: 87.6
- type: accuracy
name: Danish Test accuracy
value: 89.0
- type: accuracy
name: Low Saxon Test accuracy
value: 58.3
- type: accuracy
name: Akkadian Test accuracy
value: 22.9
- type: accuracy
name: Armenian Test accuracy
value: 86.7
- type: accuracy
name: Welsh Test accuracy
value: 70.2
- type: accuracy
name: Old East Slavic Test accuracy
value: 73.5
- type: accuracy
name: Albanian Test accuracy
value: 78.9
- type: accuracy
name: Slovenian Test accuracy
value: 76.3
- type: accuracy
name: Guajajara Test accuracy
value: 22.1
- type: accuracy
name: Kurmanji Test accuracy
value: 78.3
- type: accuracy
name: Turkish Test accuracy
value: 78.3
- type: accuracy
name: Finnish Test accuracy
value: 86.2
- type: accuracy
name: Indonesian Test accuracy
value: 85.4
- type: accuracy
name: Ukrainian Test accuracy
value: 85.8
- type: accuracy
name: Polish Test accuracy
value: 86.3
- type: accuracy
name: Portuguese Test accuracy
value: 90.0
- type: accuracy
name: Kazakh Test accuracy
value: 83.0
- type: accuracy
name: Latin Test accuracy
value: 79.0
- type: accuracy
name: Old French Test accuracy
value: 53.1
- type: accuracy
name: Buryat Test accuracy
value: 58.4
- type: accuracy
name: Kaapor Test accuracy
value: 13.8
- type: accuracy
name: Korean Test accuracy
value: 62.2
- type: accuracy
name: Estonian Test accuracy
value: 87.6
- type: accuracy
name: Croatian Test accuracy
value: 87.6
- type: accuracy
name: Gothic Test accuracy
value: 16.5
- type: accuracy
name: Swiss German Test accuracy
value: 48.3
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 36.5
- type: accuracy
name: Naija Test accuracy
value: 36.0
- type: accuracy
name: Latvian Test accuracy
value: 86.6
- type: accuracy
name: Chinese Test accuracy
value: 47.9
- type: accuracy
name: Tagalog Test accuracy
value: 73.9
- type: accuracy
name: Bambara Test accuracy
value: 29.7
- type: accuracy
name: Lithuanian Test accuracy
value: 85.7
- type: accuracy
name: Galician Test accuracy
value: 87.4
- type: accuracy
name: Vietnamese Test accuracy
value: 65.1
- type: accuracy
name: Greek Test accuracy
value: 86.3
- type: accuracy
name: Catalan Test accuracy
value: 89.5
- type: accuracy
name: Czech Test accuracy
value: 87.3
- type: accuracy
name: Erzya Test accuracy
value: 43.0
- type: accuracy
name: Bhojpuri Test accuracy
value: 48.5
- type: accuracy
name: Thai Test accuracy
value: 58.1
- type: accuracy
name: Marathi Test accuracy
value: 87.7
- type: accuracy
name: Basque Test accuracy
value: 78.2
- type: accuracy
name: Slovak Test accuracy
value: 88.2
- type: accuracy
name: Kiche Test accuracy
value: 28.2
- type: accuracy
name: Yoruba Test accuracy
value: 19.5
- type: accuracy
name: Warlpiri Test accuracy
value: 27.9
- type: accuracy
name: Tamil Test accuracy
value: 84.3
- type: accuracy
name: Maltese Test accuracy
value: 19.2
- type: accuracy
name: Ancient Greek Test accuracy
value: 66.3
- type: accuracy
name: Icelandic Test accuracy
value: 84.3
- type: accuracy
name: Mbya Guarani Test accuracy
value: 25.6
- type: accuracy
name: Urdu Test accuracy
value: 68.5
- type: accuracy
name: Romanian Test accuracy
value: 83.8
- type: accuracy
name: Persian Test accuracy
value: 78.3
- type: accuracy
name: Apurina Test accuracy
value: 27.3
- type: accuracy
name: Japanese Test accuracy
value: 34.1
- type: accuracy
name: Hungarian Test accuracy
value: 87.2
- type: accuracy
name: Hindi Test accuracy
value: 73.3
- type: accuracy
name: Classical Chinese Test accuracy
value: 28.3
- type: accuracy
name: Komi Permyak Test accuracy
value: 45.1
- type: accuracy
name: Faroese Test accuracy
value: 78.3
- type: accuracy
name: Sanskrit Test accuracy
value: 30.3
- type: accuracy
name: Livvi Test accuracy
value: 63.1
- type: accuracy
name: Arabic Test accuracy
value: 80.0
- type: accuracy
name: Wolof Test accuracy
value: 27.7
- type: accuracy
name: Bulgarian Test accuracy
value: 89.2
- type: accuracy
name: Akuntsu Test accuracy
value: 28.0
- type: accuracy
name: Makurap Test accuracy
value: 7.5
- type: accuracy
name: Kangri Test accuracy
value: 44.9
- type: accuracy
name: Breton Test accuracy
value: 65.8
- type: accuracy
name: Telugu Test accuracy
value: 85.7
- type: accuracy
name: Cantonese Test accuracy
value: 50.7
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 49.4
- type: accuracy
name: Karelian Test accuracy
value: 73.5
- type: accuracy
name: Upper Sorbian Test accuracy
value: 70.9
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 64.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 37.1
- type: accuracy
name: Irish Test accuracy
value: 68.9
- type: accuracy
name: Nayini Test accuracy
value: 46.2
- type: accuracy
name: Munduruku Test accuracy
value: 12.3
- type: accuracy
name: Manx Test accuracy
value: 35.7
- type: accuracy
name: Skolt Sami Test accuracy
value: 30.1
- type: accuracy
name: Afrikaans Test accuracy
value: 88.4
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 24.9
- type: accuracy
name: Belarusian Test accuracy
value: 87.2
- type: accuracy
name: Serbian Test accuracy
value: 89.0
- type: accuracy
name: Moksha Test accuracy
value: 41.5
- type: accuracy
name: Western Armenian Test accuracy
value: 79.0
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 59.5
- type: accuracy
name: Khunsari Test accuracy
value: 40.5
- type: accuracy
name: Hebrew Test accuracy
value: 94.8
- type: accuracy
name: Uyghur Test accuracy
value: 77.2
- type: accuracy
name: Chukchi Test accuracy
value: 30.5
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Dutch
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-nl")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-nl")
```
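As a quick follow-up, a sketch of tagging a Dutch sentence with the `token-classification` pipeline; the example sentence is illustrative, and the tag names come from the model's own label set:
```python
from transformers import pipeline

# "simple" aggregation merges word pieces back into whole words before printing tags.
tagger = pipeline(
    "token-classification",
    model="wietsedv/xlm-roberta-base-ft-udpos28-nl",
    aggregation_strategy="simple",
)
for token in tagger("Dit is een voorbeeldzin."):
    print(token["word"], token["entity_group"])
```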
|
wietsedv/xlm-roberta-base-ft-udpos28-vi | 6a4a46823a9de1d5b279b834c8f216cd23a6863d | 2022-02-25T09:59:37.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"vi",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-vi | 6 | null | transformers | 15,444 |
---
language:
- vi
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-vi
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 57.2
- type: accuracy
name: Dutch Test accuracy
value: 58.4
- type: accuracy
name: German Test accuracy
value: 57.7
- type: accuracy
name: Italian Test accuracy
value: 57.3
- type: accuracy
name: French Test accuracy
value: 53.8
- type: accuracy
name: Spanish Test accuracy
value: 58.7
- type: accuracy
name: Russian Test accuracy
value: 66.9
- type: accuracy
name: Swedish Test accuracy
value: 59.3
- type: accuracy
name: Norwegian Test accuracy
value: 56.7
- type: accuracy
name: Danish Test accuracy
value: 59.3
- type: accuracy
name: Low Saxon Test accuracy
value: 40.3
- type: accuracy
name: Akkadian Test accuracy
value: 34.0
- type: accuracy
name: Armenian Test accuracy
value: 62.9
- type: accuracy
name: Welsh Test accuracy
value: 50.9
- type: accuracy
name: Old East Slavic Test accuracy
value: 54.9
- type: accuracy
name: Albanian Test accuracy
value: 57.0
- type: accuracy
name: Slovenian Test accuracy
value: 53.5
- type: accuracy
name: Guajajara Test accuracy
value: 36.6
- type: accuracy
name: Kurmanji Test accuracy
value: 58.5
- type: accuracy
name: Turkish Test accuracy
value: 61.7
- type: accuracy
name: Finnish Test accuracy
value: 60.2
- type: accuracy
name: Indonesian Test accuracy
value: 62.7
- type: accuracy
name: Ukrainian Test accuracy
value: 66.1
- type: accuracy
name: Polish Test accuracy
value: 65.1
- type: accuracy
name: Portuguese Test accuracy
value: 64.5
- type: accuracy
name: Kazakh Test accuracy
value: 70.5
- type: accuracy
name: Latin Test accuracy
value: 57.3
- type: accuracy
name: Old French Test accuracy
value: 36.4
- type: accuracy
name: Buryat Test accuracy
value: 55.9
- type: accuracy
name: Kaapor Test accuracy
value: 27.9
- type: accuracy
name: Korean Test accuracy
value: 53.4
- type: accuracy
name: Estonian Test accuracy
value: 57.4
- type: accuracy
name: Croatian Test accuracy
value: 59.3
- type: accuracy
name: Gothic Test accuracy
value: 22.2
- type: accuracy
name: Swiss German Test accuracy
value: 39.8
- type: accuracy
name: Assyrian Test accuracy
value: 16.1
- type: accuracy
name: North Sami Test accuracy
value: 38.4
- type: accuracy
name: Naija Test accuracy
value: 26.3
- type: accuracy
name: Latvian Test accuracy
value: 66.0
- type: accuracy
name: Chinese Test accuracy
value: 35.0
- type: accuracy
name: Tagalog Test accuracy
value: 63.4
- type: accuracy
name: Bambara Test accuracy
value: 27.8
- type: accuracy
name: Lithuanian Test accuracy
value: 68.2
- type: accuracy
name: Galician Test accuracy
value: 60.6
- type: accuracy
name: Vietnamese Test accuracy
value: 93.7
- type: accuracy
name: Greek Test accuracy
value: 54.1
- type: accuracy
name: Catalan Test accuracy
value: 55.0
- type: accuracy
name: Czech Test accuracy
value: 62.2
- type: accuracy
name: Erzya Test accuracy
value: 48.8
- type: accuracy
name: Bhojpuri Test accuracy
value: 44.4
- type: accuracy
name: Thai Test accuracy
value: 50.2
- type: accuracy
name: Marathi Test accuracy
value: 66.3
- type: accuracy
name: Basque Test accuracy
value: 59.2
- type: accuracy
name: Slovak Test accuracy
value: 63.1
- type: accuracy
name: Kiche Test accuracy
value: 38.7
- type: accuracy
name: Yoruba Test accuracy
value: 25.3
- type: accuracy
name: Warlpiri Test accuracy
value: 49.0
- type: accuracy
name: Tamil Test accuracy
value: 62.8
- type: accuracy
name: Maltese Test accuracy
value: 31.6
- type: accuracy
name: Ancient Greek Test accuracy
value: 44.9
- type: accuracy
name: Icelandic Test accuracy
value: 52.2
- type: accuracy
name: Mbya Guarani Test accuracy
value: 33.5
- type: accuracy
name: Urdu Test accuracy
value: 45.2
- type: accuracy
name: Romanian Test accuracy
value: 61.8
- type: accuracy
name: Persian Test accuracy
value: 57.3
- type: accuracy
name: Apurina Test accuracy
value: 46.2
- type: accuracy
name: Japanese Test accuracy
value: 25.5
- type: accuracy
name: Hungarian Test accuracy
value: 55.5
- type: accuracy
name: Hindi Test accuracy
value: 49.6
- type: accuracy
name: Classical Chinese Test accuracy
value: 22.4
- type: accuracy
name: Komi Permyak Test accuracy
value: 44.9
- type: accuracy
name: Faroese Test accuracy
value: 58.4
- type: accuracy
name: Sanskrit Test accuracy
value: 34.7
- type: accuracy
name: Livvi Test accuracy
value: 60.3
- type: accuracy
name: Arabic Test accuracy
value: 61.6
- type: accuracy
name: Wolof Test accuracy
value: 28.9
- type: accuracy
name: Bulgarian Test accuracy
value: 64.0
- type: accuracy
name: Akuntsu Test accuracy
value: 43.4
- type: accuracy
name: Makurap Test accuracy
value: 20.5
- type: accuracy
name: Kangri Test accuracy
value: 40.7
- type: accuracy
name: Breton Test accuracy
value: 53.0
- type: accuracy
name: Telugu Test accuracy
value: 64.6
- type: accuracy
name: Cantonese Test accuracy
value: 40.7
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 36.4
- type: accuracy
name: Karelian Test accuracy
value: 57.7
- type: accuracy
name: Upper Sorbian Test accuracy
value: 58.0
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 59.7
- type: accuracy
name: Komi Zyrian Test accuracy
value: 46.3
- type: accuracy
name: Irish Test accuracy
value: 48.9
- type: accuracy
name: Nayini Test accuracy
value: 42.3
- type: accuracy
name: Munduruku Test accuracy
value: 38.1
- type: accuracy
name: Manx Test accuracy
value: 35.2
- type: accuracy
name: Skolt Sami Test accuracy
value: 39.3
- type: accuracy
name: Afrikaans Test accuracy
value: 53.8
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 49.1
- type: accuracy
name: Belarusian Test accuracy
value: 66.3
- type: accuracy
name: Serbian Test accuracy
value: 58.3
- type: accuracy
name: Moksha Test accuracy
value: 46.6
- type: accuracy
name: Western Armenian Test accuracy
value: 58.2
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 43.8
- type: accuracy
name: Khunsari Test accuracy
value: 45.9
- type: accuracy
name: Hebrew Test accuracy
value: 75.0
- type: accuracy
name: Uyghur Test accuracy
value: 70.7
- type: accuracy
name: Chukchi Test accuracy
value: 33.1
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Vietnamese
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-vi")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-vi")
```
|
saattrupdan/kblab-voxrex-wav2vec2-large-cv8-da | 9cd5df38791018471740727d8050bfd25d36c0d4 | 2022-03-21T18:25:44.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"da",
"dataset:common_voice_8_0",
"transformers",
"license:cc0-1.0",
"model-index"
]
| automatic-speech-recognition | false | saattrupdan | null | saattrupdan/kblab-voxrex-wav2vec2-large-cv8-da | 6 | 1 | transformers | 15,445 | ---
language:
- da
license: cc0-1.0
tasks:
- automatic-speech-recognition
datasets:
- common_voice_8_0
metrics:
- wer
model-index:
- name: kblab-voxrex-wav2vec2-large-cv8-da
results:
- task:
type: automatic-speech-recognition
dataset:
type: mozilla-foundation/common_voice_8_0
args: da
name: Danish Common Voice 8.0
metrics:
- type: wer
value: 30.51
- task:
type: automatic-speech-recognition
dataset:
type: Alvenir/alvenir_asr_da_eval
name: Alvenir ASR test dataset
metrics:
- type: wer
value: 28.33
---
# KBLab-VoxRex-Wav2vec2-large-CV8-da
## Model description
This model is a fine-tuned version of the Swedish acoustic model [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the Danish part of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), containing ~6 crowdsourced hours of read-aloud Danish speech.
## Performance
The model achieves the following WER scores (lower is better):
| **Dataset** | **WER without LM** | **WER with 5-gram LM** |
| :---: | ---: | ---: |
| [Danish part of Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/viewer/da/train) | 37.63 | 30.51 |
| [Alvenir test set](https://huggingface.co/datasets/Alvenir/alvenir_asr_da_eval) | 35.75 | 28.33 | |
HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.2-concept-extraction-allwikipedia-v1.0 | 2ce7c5db770b4627ed46ebee3cb2ed2f6bee6859 | 2022-02-24T11:09:53.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.2-concept-extraction-allwikipedia-v1.0 | 6 | null | transformers | 15,446 | Entry not found |
anantoj/wav2vec2-large-xlsr-53-adult-child-cls | d380be1f1c029cf63bb72a09f26c3d45e99a88d2 | 2022-02-24T15:59:19.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | anantoj | null | anantoj/wav2vec2-large-xlsr-53-adult-child-cls | 6 | null | transformers | 15,447 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: wav2vec2-xls-r-300m-adult-child-cls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-adult-child-cls
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1755
- Accuracy: 0.9432
- F1: 0.9472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.368 | 1.0 | 383 | 0.2560 | 0.9072 | 0.9126 |
| 0.2013 | 2.0 | 766 | 0.1959 | 0.9321 | 0.9362 |
| 0.22 | 3.0 | 1149 | 0.1755 | 0.9432 | 0.9472 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sarahlmk/autonlp-imdb-classification-596216804 | de68d78c80d6274570646a072cc7156089a60c32 | 2022-02-25T06:16:45.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:sarahlmk/autonlp-data-imdb-classification",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | sarahlmk | null | sarahlmk/autonlp-imdb-classification-596216804 | 6 | null | transformers | 15,448 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- sarahlmk/autonlp-data-imdb-classification
co2_eq_emissions: 274.81371614671764
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 596216804
- CO2 Emissions (in grams): 274.81371614671764
## Validation Metrics
- Loss: 0.24049481749534607
- Accuracy: 0.9239
- Precision: 0.9143695014662757
- Recall: 0.9354
- AUC: 0.9781644
- F1: 0.9247652001977262
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/sarahlmk/autonlp-imdb-classification-596216804
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("sarahlmk/autonlp-imdb-classification-596216804", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("sarahlmk/autonlp-imdb-classification-596216804", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000 | f4d032af5ebdac7391ffabff245846152b008c2b | 2022-02-25T07:33:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ASCCCCCCCC | null | ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000 | 6 | null | transformers | 15,449 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-amazon_zh_20000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3031
- Accuracy: 0.4406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.396 | 1.0 | 1250 | 1.3031 | 0.4406 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-slanted | e94b419108993c17b52a90fe421df9b34a0c98cd | 2022-02-25T09:31:34.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-asian-unclean-slanted | 6 | null | transformers | 15,450 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-clean-small-warmup-100 | a61c18e21eff489aec98b8d24843c25eec406f53 | 2022-02-26T03:44:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-clean-small-warmup-100 | 6 | null | transformers | 15,451 | Entry not found |
msintaha/bert-base-uncased-copa-kb-27 | 3944786e733550b81d2eb083775b819ae6907606 | 2022-02-27T03:24:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"dataset:super_glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| multiple-choice | false | msintaha | null | msintaha/bert-base-uncased-copa-kb-27 | 6 | null | transformers | 15,452 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-copa-kb-27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-copa-kb-27
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6114
- Accuracy: 0.7100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6534 | 0.7400 |
| No log | 2.0 | 80 | 0.6114 | 0.7100 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
FardinSaboori/bert-finetuned-squad | 3223050ad77224f1c2a9b26dea136bbac8010605 | 2022-02-28T06:22:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | FardinSaboori | null | FardinSaboori/bert-finetuned-squad | 6 | null | transformers | 15,453 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Ebtihal/AraBertMo_base_V8 | 79cc73dd6f11c858866c098a6dcbe8e10632b275 | 2022-03-21T22:03:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"ar",
"dataset:OSCAR",
"transformers",
"Fill-Mask",
"autotrain_compatible"
]
| fill-mask | false | Ebtihal | null | Ebtihal/AraBertMo_base_V8 | 6 | null | transformers | 15,454 | Arabic Model AraBertMo_base_V8
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.
## Pretraining Corpus
The `AraBertMo_base_V8` model was pre-trained on ~3 million words: [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num epochs | Batch size | Steps | Wall time | Training loss |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask | 40032 | 8 | 64 | 5008 | 10h 5m 57s | 7.2164 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` together with the Hugging Face `transformers` library, and initializing it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V8")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V8")
```
## This model was built for master's degree research at the following organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
peterhsu/mt5-small-finetuned-amazon-en-es | df5ad96888c11ef68f58ddf61640354259cce38c | 2022-02-28T18:40:06.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| translation | false | peterhsu | null | peterhsu/mt5-small-finetuned-amazon-en-es | 6 | null | transformers | 15,455 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0255
- Rouge1: 17.5202
- Rouge2: 8.4634
- Rougel: 17.0175
- Rougelsum: 17.0528
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 8.094 | 1.0 | 1209 | 3.2933 | 12.7563 | 5.2606 | 12.4786 | 12.4961 |
| 3.9263 | 2.0 | 2418 | 3.1487 | 16.2314 | 8.4716 | 15.6854 | 15.7506 |
| 3.599 | 3.0 | 3627 | 3.0789 | 16.9233 | 8.1928 | 16.2596 | 16.2522 |
| 3.429 | 4.0 | 4836 | 3.0492 | 17.2679 | 8.7561 | 16.6685 | 16.7399 |
| 3.3279 | 5.0 | 6045 | 3.0384 | 17.6081 | 8.6721 | 17.0546 | 17.0368 |
| 3.2518 | 6.0 | 7254 | 3.0343 | 17.2271 | 8.504 | 16.6285 | 16.6209 |
| 3.2084 | 7.0 | 8463 | 3.0255 | 16.7859 | 8.054 | 16.2574 | 16.2853 |
| 3.1839 | 8.0 | 9672 | 3.0255 | 17.5202 | 8.4634 | 17.0175 | 17.0528 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
PhilSad/GPT-J6B-Guided-SCP | 84243966f02befd079b8d67f610b72a0e1eb91d0 | 2022-03-06T22:52:07.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
]
| text-generation | false | PhilSad | null | PhilSad/GPT-J6B-Guided-SCP | 6 | null | transformers | 15,456 | Attempt of guided text generation to replace GPT-3 for :[This SCP Does Not Exist](https://www.thisscpdoesnotexist.ml)
Work in Porgress
Finetuned on a dataset of 1700 automatically generated samples from the [official SCP wiki](https://scp-wiki.wikidot.com/)
Exemple input :
```Prompt: SCP-9741 is a pair of jeans that looks really cool ### Generation: Item #: SCP-9741\nObject Class: Safe\nSpecial Containment Procedures:```
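A minimal sketch of querying the model with this prompt format; the generation settings are illustrative assumptions, and GPT-J-6B needs a large GPU or a machine with plenty of RAM:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("PhilSad/GPT-J6B-Guided-SCP")
model = AutoModelForCausalLM.from_pretrained("PhilSad/GPT-J6B-Guided-SCP")

prompt = (
    "Prompt: SCP-9741 is a pair of jeans that looks really cool "
    "### Generation: Item #: SCP-9741\nObject Class: Safe\nSpecial Containment Procedures:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```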
# Acknowledgment
This work was made possible thanks to the TPU Research Cloud program by Google
|
armageddon/distilbert-base-uncased-squad2-covid-qa-deepset | d8c7108e9f29b229ed6467b0439d197fe65543a8 | 2022-03-01T08:32:06.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | armageddon | null | armageddon/distilbert-base-uncased-squad2-covid-qa-deepset | 6 | null | transformers | 15,457 | ---
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: distilbert-base-uncased-squad2-covid-qa-deepset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-covid-qa-deepset
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
coastalcph/fairlex-fscs-minilm | a190ec4f1e2c999ede159a36a4a125d97cdb4aed | 2022-03-01T13:36:58.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"de",
"fr",
"it",
"transformers",
"legal",
"fairlex",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | coastalcph | null | coastalcph/fairlex-fscs-minilm | 6 | null | transformers | 15,458 | ---
language:
- de
- fr
- it
pipeline_tag: fill-mask
license: cc-by-nc-sa-4.0
tags:
- legal
- fairlex
widget:
- text: "Aus seinem damaligen strafbaren Verhalten resultierte eine Forderung der Nachlassverwaltung eines <mask>, worüber eine aussergerichtliche Vereinbarung über Fr. 500'000."
- text: " Elle avait pour but social les <mask> dans le domaine des changes, en particulier l'exploitation d'une plateforme internet."
- text: "Il Pretore ha accolto la petizione con sentenza 16 luglio 2015, accordando all'attore l'importo <mask>, con interessi di mora a partire dalla notifica del precetto esecutivo, e ha rigettato in tale misura l'opposizione interposta a quest'ultimo."
---
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Switzerland, and China), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021), using the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` |
| `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` |
| `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm` | CAIL | `zh` |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-fscs-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-fscs-minilm")
```
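As a follow-up, a short sketch of running the model on one of the widget examples with the `fill-mask` pipeline; the sentence is the French widget example above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="coastalcph/fairlex-fscs-minilm")
sentence = ("Elle avait pour but social les <mask> dans le domaine des changes, "
            "en particulier l'exploitation d'une plateforme internet.")
for prediction in fill_mask(sentence):
    print(prediction["token_str"], round(prediction["score"], 3))
```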
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) | |
batterydata/batterybert-cased | 0106ed8cca65ec6302ada7123048be9c37e31a7d | 2022-03-05T16:20:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"en",
"dataset:batterypapers",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | batterydata | null | batterydata/batterybert-cased | 6 | null | transformers | 15,459 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- batterypapers
---
# BatteryBERT-cased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the [bert-base-cased](https://huggingface.co/bert-base-cased) weights. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is case-sensitive: it makes a difference between english and English.
## Model description
BatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the [bert-base-cased](https://huggingface.co/bert-base-cased) weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatteryBERT model was pretrained on the full text of battery papers only, after being initialized from the [bert-base-cased](https://huggingface.co/bert-base-cased) weights. The paper corpus contains a total of 400,366 battery research papers that were published from 2000 to June 2021 by the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at [Github](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece with a vocabulary size of 28,996. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
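As an illustration only (not part of the original training code), the 80/10/10 replacement rule described above can be sketched as follows; `tokens` and `vocab` are hypothetical placeholders:
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """Illustrative sketch of the masking rule described above."""
    masked = list(tokens)
    for i in range(len(masked)):
        if random.random() < mlm_prob:       # 15% of the tokens are selected
            r = random.random()
            if r < 0.8:                       # 80% of those become [MASK]
                masked[i] = mask_token
            elif r < 0.9:                     # 10% become a random vocabulary token
                masked[i] = random.choice(vocab)
            # remaining 10%: the token is left as is
    return masked
```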
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batterybert-cased')
>>> unmasker("Hello I'm a [MASK] model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-cased')
model = BertModel.from_pretrained('batterydata/batterybert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-cased')
model = TFBertModel.from_pretrained('batterydata/batterybert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 0.9609.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
alk/distilbert-base-uncased-finetuned-emotion | 6be3c577203e4a043983b3bb82956e22d57096a3 | 2022-03-01T23:56:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | alk | null | alk/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,460 | Entry not found |
BAHIJA/distilbert-base-uncased-finetuned-cola | 36cf32cc6c0648fda3c472ccdc9d8ce57d624029 | 2022-03-13T23:42:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | BAHIJA | null | BAHIJA/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 15,461 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5481326292844919
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7371
- Matthews Correlation: 0.5481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5298 | 1.0 | 535 | 0.5333 | 0.4142 |
| 0.3619 | 2.0 | 1070 | 0.5174 | 0.5019 |
| 0.2449 | 3.0 | 1605 | 0.6394 | 0.4921 |
| 0.1856 | 4.0 | 2140 | 0.7371 | 0.5481 |
| 0.133 | 5.0 | 2675 | 0.8600 | 0.5327 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
msintaha/gpt2-finetuned-rocstories | ad0bedc880fec721fa48faa759ff0f213923b50c | 2022-03-02T07:07:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | msintaha | null | msintaha/gpt2-finetuned-rocstories | 6 | null | transformers | 15,462 | Entry not found |
emekaboris/autonlp-new_tx-607517182 | 9ca19c489190b4a5a9a793718f45350fba2818d1 | 2022-03-02T14:51:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:emekaboris/autonlp-data-new_tx",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | emekaboris | null | emekaboris/autonlp-new_tx-607517182 | 6 | null | transformers | 15,463 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- emekaboris/autonlp-data-new_tx
co2_eq_emissions: 3.842950628218143
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 607517182
- CO2 Emissions (in grams): 3.842950628218143
## Validation Metrics
- Loss: 0.4033123552799225
- Accuracy: 0.8679706601466992
- Macro F1: 0.719846919916469
- Micro F1: 0.8679706601466993
- Weighted F1: 0.8622411469250695
- Macro Precision: 0.725309168791155
- Micro Precision: 0.8679706601466992
- Weighted Precision: 0.8604370906049568
- Macro Recall: 0.7216672806300003
- Micro Recall: 0.8679706601466992
- Weighted Recall: 0.8679706601466992
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/emekaboris/autonlp-new_tx-607517182
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("emekaboris/autonlp-new_tx-607517182", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("emekaboris/autonlp-new_tx-607517182", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
luffycodes/reg-roberta-large-mrpc | f9eba78d4a7b1889016e7df14da49a1306c2f4cf | 2022-04-05T02:55:11.000Z | [
"pytorch",
"roberta",
"transformers"
]
| null | false | luffycodes | null | luffycodes/reg-roberta-large-mrpc | 6 | null | transformers | 15,464 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-unclean-large | f1785116c068c566577dd98c13c6906104a0aef1 | 2022-03-03T09:05:42.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-unclean-large | 6 | null | transformers | 15,465 | Entry not found |
danielmaxwell/distilbert-base-uncased-finetuned-emotion | ecde6e825a27c69207f048eabf143e5658069d64 | 2022-03-03T16:37:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | danielmaxwell | null | danielmaxwell/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,466 | Entry not found |
everdoubling/byt5-Korean-large | d6a9809b504b53f1698138e28694071fa29f26bc | 2022-03-11T09:16:25.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:mc4",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | everdoubling | null | everdoubling/byt5-Korean-large | 6 | 1 | transformers | 15,467 | ---
datasets:
- mc4
license: apache-2.0
---
# ByT5-Korean - large
ByT5-Korean is a Korean specific extension of Google's [ByT5](https://github.com/google-research/byt5).
A Korean syllable has three components (called Jamo): a beginning consonant, a middle vowel, and an optional final consonant; they are like individual characters of an alphabet.
While ByT5's utf-8 encoding allows generic encoding for multiple languages, it is unnatural for Korean because it splits the bit representation of each Jamo in the middle.
ByT5-Korean extends ByT5's utf-8 encoding with special care for Korean syllables; each Jamo is represented with an extra token.
ByT5-Korean was pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) with 70% Korean and 30% English.
## Encoding Scheme
```text
id: token
0: <pad>
1: <eos>
2: <unk>
3~258: utf-8 encoding
259~277: beginning consonants(초성), 19개(ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ)
278~298: middle vowel(중성), 21개(ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ)
299~326: final consonant(종성), 무종성+27개(ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ)
327~384: from <extra_id_0> to <extra_id_57>
```
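To make the scheme concrete, the following sketch (not part of the official tokenizer code) decomposes a precomposed Hangul syllable into its Jamo and maps them to the token ids listed above, using the standard Unicode Hangul arithmetic:
```python
def syllable_to_token_ids(syllable: str):
    """Illustrative mapping of one Hangul syllable to the Jamo token ids above."""
    code = ord(syllable) - 0xAC00          # precomposed syllables start at U+AC00
    initial, rest = divmod(code, 21 * 28)  # 21 vowels x 28 finals per initial consonant
    medial, final = divmod(rest, 28)
    return [259 + initial, 278 + medial, 299 + final]

print(syllable_to_token_ids("한"))  # [277, 278, 303] under the table above
```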
## Example Inference
```python
import torch
from tokenizer import ByT5KoreanTokenizer # https://huggingface.co/everdoubling/byt5-Korean-large/blob/main/tokenizer.py
from transformers import T5ForConditionalGeneration
tokenizer_jamo = ByT5KoreanTokenizer()
model = T5ForConditionalGeneration.from_pretrained('everdoubling/byt5-Korean-large')
input_sentence = '한국어 위키백과(영어: Korean Wikipedia)는 한국어로 운영되는 위키백과의 다언어판 가운데 하나로서, 2002년 10월 11일에 <extra_id_0>. 또한 현재 한국어 위키백과에는 넘겨주기, 토론, 그림 등 페이지로 불리는 모든 문서를 포함하면 총 2,629,860개가 <extra_id_1>되어 있으며, 넘겨주기를 포함한 일반 문서 수는 1,278,560개,[1] 그중 넘겨주기, 막다른 문서를 제외한 일반 문서 수는 573,149개이다.'
input_ids_jamo = tokenizer_jamo(input_sentence).input_ids
outputs_jamo = model.generate(torch.tensor([input_ids_jamo]))
print(tokenizer_jamo.decode(outputs_jamo[0]))
# <pad><extra_id_0>설립되었다<extra_id_1>đě
```
Additional information coming soon...
|
petrichorRainbow/mrf-GPT | 27bbb57829c8384eeeddf23616cf7abc89f079cd | 2022-03-07T18:51:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | petrichorRainbow | null | petrichorRainbow/mrf-GPT | 6 | null | transformers | 15,468 | Entry not found |
crabz/distil-slovakbert-ner | aa6d6ce92a86aaebd1934e8ae3e62f7099f46972 | 2022-03-06T12:40:16.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | crabz | null | crabz/distil-slovakbert-ner | 6 | null | transformers | 15,469 | ---
tags:
- generated_from_trainer
datasets:
- wikiann
inference: false
model-index:
- name: distil-slovakbert-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-slovakbert-ner
This model is a fine-tuned version of [crabz/distil-slovakbert](https://huggingface.co/crabz/distil-slovakbert) on the wikiann sk dataset.
- F1: 0.9307
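A minimal usage sketch (not part of the original card; the example sentence is illustrative and the predicted spans follow the wikiann label set):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="crabz/distil-slovakbert-ner", aggregation_strategy="simple")
print(ner("Peter Sagan sa narodil v Žiline."))
```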
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.11.0
|
billfrench/autonlp-cyberlandr-ai-4-614417501 | 698148eea85330ddefbfff950f65ec147d7dc75f | 2022-03-07T00:57:12.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:billfrench/autonlp-data-cyberlandr-ai-4",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | billfrench | null | billfrench/autonlp-cyberlandr-ai-4-614417501 | 6 | null | transformers | 15,470 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- billfrench/autonlp-data-cyberlandr-ai-4
co2_eq_emissions: 1.6912535041856878
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 614417501
- CO2 Emissions (in grams): 1.6912535041856878
## Validation Metrics
- Loss: 1.305419921875
- Accuracy: 0.5
- Macro F1: 0.3333333333333333
- Micro F1: 0.5
- Weighted F1: 0.4444444444444444
- Macro Precision: 0.375
- Micro Precision: 0.5
- Weighted Precision: 0.5
- Macro Recall: 0.375
- Micro Recall: 0.5
- Weighted Recall: 0.5
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/billfrench/autonlp-cyberlandr-ai-4-614417501
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("billfrench/autonlp-cyberlandr-ai-4-614417501", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("billfrench/autonlp-cyberlandr-ai-4-614417501", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
bongbongco/bert-badword-puri-000 | 58ad7d80caada06da427c935da8d8454216ab944 | 2022-03-07T06:16:47.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | bongbongco | null | bongbongco/bert-badword-puri-000 | 6 | null | transformers | 15,471 | Entry not found |
zhiweitong/dpr-answer_encoder-single-nq-base | 6cca57a9d47073df6420912282f4350cc609b83c | 2022-03-08T07:25:05.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"en",
"dataset:natural_questions",
"transformers"
]
| feature-extraction | false | zhiweitong | null | zhiweitong/dpr-answer_encoder-single-nq-base | 6 | null | transformers | 15,472 | ---
language: en
datasets:
- natural_questions
---
# dpr-answer_encoder-single-nq-base
This encoder is used with [zhiweitong/dpr-ctx_encoder-single-nq-base](https://huggingface.co/zhiweitong/dpr-ctx_encoder-single-nq-base)
|
KoichiYasuoka/roberta-base-ukrainian | 9ac1bcbde4e8aa8c7729ce2c9a787b148caa2742 | 2022-03-08T23:33:19.000Z | [
"pytorch",
"roberta",
"fill-mask",
"uk",
"transformers",
"ukrainian",
"masked-lm",
"ubertext",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-ukrainian | 6 | null | transformers | 15,473 | ---
language:
- "uk"
tags:
- "ukrainian"
- "masked-lm"
- "ubertext"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
---
# roberta-base-ukrainian
## Model Description
This is a RoBERTa model pre-trained on [Корпус UberText](https://lang.org.ua/uk/corpora/#anchor4). You can fine-tune `roberta-base-ukrainian` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-ukrainian-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-ukrainian")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-ukrainian")
```
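For a quick check, the model can also be used through the fill-mask pipeline; the sentence below is only an illustrative example:
```py
from transformers import pipeline
fill=pipeline("fill-mask",model="KoichiYasuoka/roberta-base-ukrainian")
print(fill("Київ є столицею [MASK]."))
```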
|
alirezafarashah/wav2vec2-base-ks-2sec | 8aed995bfd1daf1a7749cfa57d5a3267327b183c | 2022-03-09T22:14:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | alirezafarashah | null | alirezafarashah/wav2vec2-base-ks-2sec | 6 | null | transformers | 15,474 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ks-2sec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks-2sec
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0880
- Accuracy: 0.9822
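A minimal inference sketch (not from the original card; the audio file path is a hypothetical placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="alirezafarashah/wav2vec2-base-ks-2sec")
print(classifier("speech_command.wav"))  # hypothetical path to a short keyword recording
```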
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.5003 | 1.0 | 399 | 0.9643 | 0.4284 |
| 0.1868 | 2.0 | 798 | 0.9748 | 0.1628 |
| 0.1413 | 3.0 | 1197 | 0.9796 | 0.1128 |
| 0.1021 | 4.0 | 1596 | 0.9813 | 0.0940 |
| 0.1089 | 5.0 | 1995 | 0.9822 | 0.0880 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
vzty/bert-base-uncased-finetuned-argument-detection | 9d893348b779e933eb4837a7eaf9607874a40027 | 2022-03-09T08:01:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | vzty | null | vzty/bert-base-uncased-finetuned-argument-detection | 6 | null | transformers | 15,475 | Entry not found |
Narshion/mWACH_mBERT_System | 3af7fcda879c56a9d830fa60764c8cc022c31b68 | 2022-03-09T13:49:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Narshion | null | Narshion/mWACH_mBERT_System | 6 | null | transformers | 15,476 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the mWACH NEO dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.12.4
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ctoraman/RoBERTa-TR-medium-char | 630bc8b2c4b9b2fdf89a28bfff6ecd89a474c916 | 2022-04-20T06:56:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-char | 6 | null | transformers | 15,477 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium Character-level (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Character-level, which means that text is split by individual characters. Vocabulary size is 384.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
## Note that this model does not include a tokenizer file, because it uses ByT5Tokenizer. The following code can be used for model loading and tokenization; the example max length (1024) can be changed:
```
from transformers import AutoModel, AutoModelForSequenceClassification, ByT5Tokenizer

model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = ByT5Tokenizer.from_pretrained("google/byt5-small")
tokenizer.mask_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][0]
tokenizer.cls_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][1]
tokenizer.bos_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][1]
tokenizer.sep_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][2]
tokenizer.eos_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][2]
tokenizer.pad_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][3]
tokenizer.unk_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][3]
tokenizer.model_max_length = 1024
```
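Once loaded as above, the model can be used like any other encoder; a short illustrative example (the sentence is chosen only for demonstration):
```
inputs = tokenizer("merhaba dünya", return_tensors="pt")
outputs = model(**inputs)
```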
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
``` |
hyechanjun/reverse-interview-question | bcc24e8824d144c6c46a78b953ceb16522be2ca9 | 2022-03-09T18:57:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | hyechanjun | null | hyechanjun/reverse-interview-question | 6 | null | transformers | 15,478 | An AI model that, given a statement, generates a question that would have likely resulted in said statement.
Created for a Senior Project at Calvin University. |
nielsr/bert-finetuned-ner | d6d48c55b6b9b53b400dcc65895671f41c19cfc7 | 2022-03-10T07:59:41.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | nielsr | null | nielsr/bert-finetuned-ner | 6 | null | transformers | 15,479 | This is a BERT model fine-tuned on a named-entity recognition (NER) dataset.
The notebook that was used to create this model can be found here: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT.ipynb |
chiragme/autonlp-imdb-sentiment-analysis-623817873 | ce0eb41c2ee07010601e9cee45d5805f6629b259 | 2022-03-10T03:28:02.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:chiragme/autonlp-data-imdb-sentiment-analysis",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | chiragme | null | chiragme/autonlp-imdb-sentiment-analysis-623817873 | 6 | null | transformers | 15,480 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- chiragme/autonlp-data-imdb-sentiment-analysis
co2_eq_emissions: 147.38973865706626
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 623817873
- CO2 Emissions (in grams): 147.38973865706626
## Validation Metrics
- Loss: 0.2412157654762268
- Accuracy: 0.9306
- Precision: 0.9377795851972347
- Recall: 0.9224
- AUC: 0.97000504
- F1: 0.9300262149626941
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/chiragme/autonlp-imdb-sentiment-analysis-623817873
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("chiragme/autonlp-imdb-sentiment-analysis-623817873", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("chiragme/autonlp-imdb-sentiment-analysis-623817873", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
SAI2-EXP/TNANA-th-th | 29b80a243cf1d7326cc0277539f67e97d1ab0dcb | 2022-03-07T05:56:03.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | SAI2-EXP | null | SAI2-EXP/TNANA-th-th | 6 | null | transformers | 15,481 | ---
license: apache-2.0
---
|
danielbubiola/fine_tuned_text_clf_model | 08b25cd9268f2c7fe2afcd4373f488b3fa06a75b | 2022-03-10T11:10:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | danielbubiola | null | danielbubiola/fine_tuned_text_clf_model | 6 | null | transformers | 15,482 | Entry not found |
Someshfengde/autonlp-kaggledays-625717986 | 7f852a4641f7b6a3b590e20a1f36c3a2fe2d447a | 2022-03-10T15:27:01.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Someshfengde/autonlp-data-kaggledays",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | Someshfengde | null | Someshfengde/autonlp-kaggledays-625717986 | 6 | null | transformers | 15,483 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Someshfengde/autonlp-data-kaggledays
co2_eq_emissions: 68.73074770596023
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 625717986
- CO2 Emissions (in grams): 68.73074770596023
## Validation Metrics
- Loss: 0.859463632106781
- Accuracy: 0.6118427330852181
- Macro F1: 0.6112554383858383
- Micro F1: 0.6118427330852181
- Weighted F1: 0.6112706859556324
- Macro Precision: 0.6121119616189625
- Micro Precision: 0.6118427330852181
- Weighted Precision: 0.6121068719118146
- Macro Recall: 0.6118067898609261
- Micro Recall: 0.6118427330852181
- Weighted Recall: 0.6118427330852181
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Someshfengde/autonlp-kaggledays-625717986
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Someshfengde/autonlp-kaggledays-625717986", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Someshfengde/autonlp-kaggledays-625717986", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
muneson/nb-wav2vec2-300m-nynorsk | 554c6ea7181693a1e67a5fff2ad02b78f725cb14 | 2022-03-13T05:26:51.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"NbAiLab/NPSC",
"generated_from_trainer",
"license:cc0-1.0",
"model-index"
]
| automatic-speech-recognition | false | muneson | null | muneson/nb-wav2vec2-300m-nynorsk | 6 | null | transformers | 15,484 | ---
license: cc0-1.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the NBAILAB/NPSC - 16K_MP3_NYNORSK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4929
- Wer: 0.1455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 80.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0168 | 0.54 | 500 | 3.0478 | 1.0 |
| 2.8486 | 1.08 | 1000 | 2.7863 | 1.0 |
| 1.0509 | 1.62 | 1500 | 0.8737 | 0.5449 |
| 0.7873 | 2.16 | 2000 | 0.6718 | 0.4292 |
| 0.6987 | 2.7 | 2500 | 0.5497 | 0.3589 |
| 0.5548 | 3.24 | 3000 | 0.4841 | 0.3145 |
| 0.5421 | 3.78 | 3500 | 0.4569 | 0.2927 |
| 0.4416 | 4.31 | 4000 | 0.4702 | 0.2822 |
| 0.4388 | 4.85 | 4500 | 0.4145 | 0.2641 |
| 0.4011 | 5.39 | 5000 | 0.4033 | 0.2565 |
| 0.3959 | 5.93 | 5500 | 0.4127 | 0.2450 |
| 0.3643 | 6.47 | 6000 | 0.3972 | 0.2420 |
| 0.3594 | 7.01 | 6500 | 0.3882 | 0.2392 |
| 0.3315 | 7.55 | 7000 | 0.3714 | 0.2337 |
| 0.3131 | 8.09 | 7500 | 0.3964 | 0.2313 |
| 0.3192 | 8.63 | 8000 | 0.3711 | 0.2268 |
| 0.2855 | 9.17 | 8500 | 0.3815 | 0.2293 |
| 0.2756 | 9.71 | 9000 | 0.3653 | 0.2187 |
| 0.248 | 10.25 | 9500 | 0.3929 | 0.2093 |
| 0.2428 | 10.79 | 10000 | 0.3641 | 0.1986 |
| 0.2412 | 11.33 | 10500 | 0.3687 | 0.1978 |
| 0.2455 | 11.87 | 11000 | 0.3942 | 0.2005 |
| 0.2181 | 12.41 | 11500 | 0.3611 | 0.1876 |
| 0.2321 | 12.94 | 12000 | 0.3586 | 0.1940 |
| 0.2132 | 13.48 | 12500 | 0.3904 | 0.1892 |
| 0.2162 | 14.02 | 13000 | 0.3812 | 0.1867 |
| 0.205 | 14.56 | 13500 | 0.3751 | 0.1839 |
| 0.1757 | 15.1 | 14000 | 0.3722 | 0.1816 |
| 0.1722 | 15.64 | 14500 | 0.3873 | 0.1793 |
| 0.1862 | 16.18 | 15000 | 0.3924 | 0.1790 |
| 0.1549 | 16.72 | 15500 | 0.3719 | 0.1782 |
| 0.1616 | 17.26 | 16000 | 0.3570 | 0.1830 |
| 0.1646 | 17.8 | 16500 | 0.3867 | 0.1839 |
| 0.1541 | 18.34 | 17000 | 0.3944 | 0.1817 |
| 0.165 | 18.88 | 17500 | 0.3909 | 0.1806 |
| 0.152 | 19.42 | 18000 | 0.3883 | 0.1766 |
| 0.1532 | 19.96 | 18500 | 0.3732 | 0.1783 |
| 0.1498 | 20.5 | 19000 | 0.3931 | 0.1713 |
| 0.1424 | 21.04 | 19500 | 0.4205 | 0.1730 |
| 0.1394 | 21.57 | 20000 | 0.4291 | 0.1710 |
| 0.1407 | 22.11 | 20500 | 0.4239 | 0.1757 |
| 0.1275 | 22.65 | 21000 | 0.4171 | 0.1719 |
| 0.1262 | 23.19 | 21500 | 0.4346 | 0.1706 |
| 0.1301 | 23.73 | 22000 | 0.4281 | 0.1650 |
| 0.1342 | 24.27 | 22500 | 0.4469 | 0.1680 |
| 0.1249 | 24.81 | 23000 | 0.4297 | 0.1709 |
| 0.1143 | 25.35 | 23500 | 0.4130 | 0.1665 |
| 0.1121 | 25.89 | 24000 | 0.4458 | 0.1633 |
| 0.1206 | 26.43 | 24500 | 0.4597 | 0.1663 |
| 0.1142 | 26.97 | 25000 | 0.3961 | 0.1726 |
| 0.1025 | 27.51 | 25500 | 0.3985 | 0.1629 |
| 0.0961 | 28.05 | 26000 | 0.4002 | 0.1629 |
| 0.1253 | 28.59 | 26500 | 0.4256 | 0.1624 |
| 0.1228 | 29.13 | 27000 | 0.4308 | 0.1653 |
| 0.1034 | 29.67 | 27500 | 0.4354 | 0.1646 |
| 0.0853 | 30.2 | 28000 | 0.4200 | 0.1588 |
| 0.0936 | 30.74 | 28500 | 0.4748 | 0.1596 |
| 0.1015 | 31.28 | 29000 | 0.4383 | 0.1651 |
| 0.1 | 31.82 | 29500 | 0.4436 | 0.1659 |
| 0.1087 | 32.36 | 30000 | 0.4121 | 0.1596 |
| 0.1084 | 32.9 | 30500 | 0.4297 | 0.1602 |
| 0.0855 | 33.44 | 31000 | 0.4453 | 0.1645 |
| 0.0872 | 33.98 | 31500 | 0.4377 | 0.1605 |
| 0.0893 | 34.52 | 32000 | 0.4373 | 0.1556 |
| 0.0864 | 35.06 | 32500 | 0.4244 | 0.1607 |
| 0.08 | 35.6 | 33000 | 0.3972 | 0.1615 |
| 0.1025 | 36.14 | 33500 | 0.4481 | 0.1580 |
| 0.099 | 36.68 | 34000 | 0.4224 | 0.1613 |
| 0.083 | 37.22 | 34500 | 0.4499 | 0.1577 |
| 0.0783 | 37.76 | 35000 | 0.4649 | 0.1558 |
| 0.0856 | 38.3 | 35500 | 0.4493 | 0.1546 |
| 0.0888 | 38.83 | 36000 | 0.4313 | 0.1530 |
| 0.0752 | 39.37 | 36500 | 0.4737 | 0.1544 |
| 0.0723 | 39.91 | 37000 | 0.4539 | 0.1549 |
| 0.0785 | 40.45 | 37500 | 0.4585 | 0.1550 |
| 0.0686 | 40.99 | 38000 | 0.4489 | 0.1564 |
| 0.08 | 41.53 | 38500 | 0.4569 | 0.1553 |
| 0.0699 | 42.07 | 39000 | 0.4791 | 0.1551 |
| 0.066 | 42.61 | 39500 | 0.4807 | 0.1530 |
| 0.072 | 43.15 | 40000 | 0.4456 | 0.1570 |
| 0.0818 | 43.69 | 40500 | 0.4544 | 0.1582 |
| 0.0741 | 44.23 | 41000 | 0.4646 | 0.1573 |
| 0.0691 | 44.77 | 41500 | 0.4576 | 0.1531 |
| 0.0605 | 45.31 | 42000 | 0.4776 | 0.1558 |
| 0.0705 | 45.85 | 42500 | 0.4468 | 0.1562 |
| 0.0671 | 46.39 | 43000 | 0.4782 | 0.1563 |
| 0.0612 | 46.93 | 43500 | 0.4761 | 0.1542 |
| 0.0588 | 47.46 | 44000 | 0.4846 | 0.1534 |
| 0.0752 | 48.0 | 44500 | 0.4972 | 0.1554 |
| 0.0595 | 48.54 | 45000 | 0.4784 | 0.1546 |
| 0.0591 | 49.08 | 45500 | 0.4750 | 0.1609 |
| 0.0594 | 49.62 | 46000 | 0.4641 | 0.1593 |
| 0.0539 | 50.16 | 46500 | 0.4746 | 0.1545 |
| 0.0605 | 50.7 | 47000 | 0.4535 | 0.1586 |
| 0.0515 | 51.24 | 47500 | 0.4701 | 0.1577 |
| 0.058 | 51.78 | 48000 | 0.4667 | 0.1554 |
| 0.0503 | 52.32 | 48500 | 0.4747 | 0.1527 |
| 0.0536 | 52.86 | 49000 | 0.4914 | 0.1494 |
| 0.0569 | 53.4 | 49500 | 0.4869 | 0.1789 |
| 0.0711 | 53.94 | 50000 | 0.4863 | 0.1534 |
| 0.0605 | 54.48 | 50500 | 0.4533 | 0.1533 |
| 0.085 | 55.02 | 51000 | 0.4679 | 0.1545 |
| 0.05 | 55.56 | 51500 | 0.4699 | 0.1528 |
| 0.0577 | 56.09 | 52000 | 0.4865 | 0.1521 |
| 0.0494 | 56.63 | 52500 | 0.4852 | 0.1524 |
| 0.056 | 57.17 | 53000 | 0.4923 | 0.1508 |
| 0.056 | 57.71 | 53500 | 0.5102 | 0.1526 |
| 0.0515 | 58.25 | 54000 | 0.4989 | 0.1502 |
| 0.0465 | 58.79 | 54500 | 0.4852 | 0.1471 |
| 0.0537 | 59.33 | 55000 | 0.4716 | 0.1507 |
| 0.0494 | 59.87 | 55500 | 0.4852 | 0.1502 |
| 0.0482 | 60.41 | 56000 | 0.4887 | 0.1494 |
| 0.0574 | 60.95 | 56500 | 0.4689 | 0.1504 |
| 0.0558 | 61.49 | 57000 | 0.4683 | 0.1509 |
| 0.0509 | 62.03 | 57500 | 0.4923 | 0.1501 |
| 0.0484 | 62.57 | 58000 | 0.4871 | 0.1488 |
| 0.0512 | 63.11 | 58500 | 0.4751 | 0.1514 |
| 0.0502 | 63.65 | 59000 | 0.4805 | 0.1510 |
| 0.0466 | 64.19 | 59500 | 0.4939 | 0.1515 |
| 0.0518 | 64.72 | 60000 | 0.4840 | 0.1514 |
| 0.038 | 65.26 | 60500 | 0.4927 | 0.1511 |
| 0.0552 | 65.8 | 61000 | 0.4910 | 0.1490 |
| 0.0529 | 66.34 | 61500 | 0.4772 | 0.1484 |
| 0.0515 | 66.88 | 62000 | 0.4688 | 0.1482 |
| 0.0528 | 67.42 | 62500 | 0.4675 | 0.1472 |
| 0.0564 | 67.96 | 63000 | 0.4735 | 0.1483 |
| 0.0466 | 68.5 | 63500 | 0.4884 | 0.1460 |
| 0.0551 | 69.04 | 64000 | 0.4771 | 0.1479 |
| 0.0436 | 69.58 | 64500 | 0.4881 | 0.1489 |
| 0.043 | 70.12 | 65000 | 0.4847 | 0.1473 |
| 0.0529 | 70.66 | 65500 | 0.4846 | 0.1478 |
| 0.0434 | 71.2 | 66000 | 0.4921 | 0.1477 |
| 0.0395 | 71.74 | 66500 | 0.4961 | 0.1471 |
| 0.0398 | 72.28 | 67000 | 0.4940 | 0.1473 |
| 0.0405 | 72.82 | 67500 | 0.4891 | 0.1465 |
| 0.0404 | 73.35 | 68000 | 0.4880 | 0.1462 |
| 0.0478 | 73.89 | 68500 | 0.4937 | 0.1468 |
| 0.0388 | 74.43 | 69000 | 0.4868 | 0.1464 |
| 0.0426 | 74.97 | 69500 | 0.4965 | 0.1458 |
| 0.0382 | 75.51 | 70000 | 0.4999 | 0.1460 |
| 0.0426 | 76.05 | 70500 | 0.4944 | 0.1466 |
| 0.0459 | 76.59 | 71000 | 0.4978 | 0.1463 |
| 0.0366 | 77.13 | 71500 | 0.5010 | 0.1466 |
| 0.0511 | 77.67 | 72000 | 0.4920 | 0.1453 |
| 0.045 | 78.21 | 72500 | 0.4974 | 0.1461 |
| 0.0425 | 78.75 | 73000 | 0.4926 | 0.1453 |
| 0.0431 | 79.29 | 73500 | 0.4925 | 0.1456 |
| 0.0362 | 79.83 | 74000 | 0.4929 | 0.1455 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 1.18.5.dev0
- Tokenizers 0.11.6
|
anton-l/xtreme_s_xlsr_minds14_longer | 943ba28a1fde067525b14b8751b41e012afa2269 | 2022-03-13T14:36:40.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
]
| audio-classification | false | anton-l | null | anton-l/xtreme_s_xlsr_minds14_longer | 6 | null | transformers | 15,485 | Entry not found |
bettertextapp/tai-byt5-small-de-correct-train | a6435150f887858d32e2ac5ef67c3280cafe70dd | 2022-03-13T21:09:11.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | bettertextapp | null | bettertextapp/tai-byt5-small-de-correct-train | 6 | null | transformers | 15,486 | Entry not found |
T-qualizer/distilbert-base-uncased-finetuned-advers | 27a1d890b4820de3eeafdd1fd2b7d4bb75852d1e | 2022-03-14T23:25:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:adversarial_qa",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | T-qualizer | null | T-qualizer/distilbert-base-uncased-finetuned-advers | 6 | null | transformers | 15,487 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- adversarial_qa
model-index:
- name: distilbert-base-uncased-finetuned-advers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-advers
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the adversarial_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6462
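A minimal inference sketch (illustrative only, not from the original card):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="T-qualizer/distilbert-base-uncased-finetuned-advers")
result = qa(question="What was the model fine-tuned on?",
            context="This checkpoint was fine-tuned on the adversarial_qa dataset.")
print(result)
```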
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6424 | 0.18 | 3000 | 3.6462 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
wypoon/distilbert-base-uncased-finetuned-emotion | 325f9c437a9ca9b9eb50c3c4d37a13572f57ff53 | 2022-03-15T00:45:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | wypoon | null | wypoon/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,488 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.919
- name: F1
type: f1
value: 0.919270748741723
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2243
- Accuracy: 0.919
- F1: 0.9193
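A minimal usage sketch (not part of the original card; the input sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="wypoon/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see them again!"))
```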
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.833 | 1.0 | 250 | 0.3188 | 0.9015 | 0.8975 |
| 0.2513 | 2.0 | 500 | 0.2243 | 0.919 | 0.9193 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tareknaous/dialogpt-daily-dialog | b0032a62d3f2544742abbb4dd3162dce48dbb5d9 | 2022-03-14T09:18:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | tareknaous | null | tareknaous/dialogpt-daily-dialog | 6 | null | transformers | 15,489 | Entry not found |
mjc00/distilbert-base-uncased-finetuned-emotion | 2f1a0dce6788703ac0d746aa4d090bf09dc057e8 | 2022-03-15T05:48:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | mjc00 | null | mjc00/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,490 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.924132235882821
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2153
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7986 | 1.0 | 250 | 0.3021 | 0.91 | 0.9078 |
| 0.2386 | 2.0 | 500 | 0.2153 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
cambridgeltl/sst_electra_small | 8cc6faecac14d33d18e9c90945a0c7d651abf80c | 2022-03-15T11:32:37.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | cambridgeltl | null | cambridgeltl/sst_electra_small | 6 | null | transformers | 15,491 | Entry not found |
pritamdeka/BioBert-PubMed200kRCT | feb24358ce21ea5ffbf4a13b96cd6e971333d365 | 2022-07-27T21:35:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | pritamdeka | null | pritamdeka/BioBert-PubMed200kRCT | 6 | null | transformers | 15,492 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
widget:
- text: "SAMPLE 32,441 archived appendix samples fixed in formalin and embedded in paraffin and tested for the presence of abnormal prion protein (PrP)."
model-index:
- name: BioBert-PubMed200kRCT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioBert-PubMed200kRCT
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the [PubMed200kRCT](https://github.com/Franck-Dernoncourt/pubmed-rct/tree/master/PubMed_200k_RCT) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2832
- Accuracy: 0.8934
## Model description
More information needed
## Intended uses & limitations
The model can be used for text classification tasks on Randomized Controlled Trials (RCTs) that do not have any structure. The text can be classified as one of the following:
* BACKGROUND
* CONCLUSIONS
* METHODS
* OBJECTIVE
* RESULTS
The model can be directly used like this:
```python
from transformers import TextClassificationPipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("pritamdeka/BioBert-PubMed200kRCT")
tokenizer = AutoTokenizer.from_pretrained("pritamdeka/BioBert-PubMed200kRCT")
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
pipe("Treatment of 12 healthy female subjects with CDCA for 2 days resulted in increased BAT activity.")
```
Results will be shown as follows:
```python
[[{'label': 'BACKGROUND', 'score': 0.0027583304326981306},
{'label': 'CONCLUSIONS', 'score': 0.044541116803884506},
{'label': 'METHODS', 'score': 0.19493348896503448},
{'label': 'OBJECTIVE', 'score': 0.003996663726866245},
{'label': 'RESULTS', 'score': 0.7537703514099121}]]
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3587 | 0.14 | 5000 | 0.3137 | 0.8834 |
| 0.3318 | 0.29 | 10000 | 0.3100 | 0.8831 |
| 0.3286 | 0.43 | 15000 | 0.3033 | 0.8864 |
| 0.3236 | 0.58 | 20000 | 0.3037 | 0.8862 |
| 0.3182 | 0.72 | 25000 | 0.2939 | 0.8876 |
| 0.3129 | 0.87 | 30000 | 0.2910 | 0.8885 |
| 0.3078 | 1.01 | 35000 | 0.2914 | 0.8887 |
| 0.2791 | 1.16 | 40000 | 0.2975 | 0.8874 |
| 0.2723 | 1.3 | 45000 | 0.2913 | 0.8906 |
| 0.2724 | 1.45 | 50000 | 0.2879 | 0.8904 |
| 0.27 | 1.59 | 55000 | 0.2874 | 0.8911 |
| 0.2681 | 1.74 | 60000 | 0.2848 | 0.8928 |
| 0.2672 | 1.88 | 65000 | 0.2832 | 0.8934 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
cambridgeltl/sst_electra_base | c0f64169b65d0dc7d8e885c6ba111da69f6d6df4 | 2022-03-15T15:45:07.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | cambridgeltl | null | cambridgeltl/sst_electra_base | 6 | null | transformers | 15,493 | Entry not found |
MrAnderson/nystrom-4096-full-trivia-copied-embeddings | 110fccda1f37fe60d01bd3dc5cb36bc4301a0526 | 2022-03-15T23:19:12.000Z | [
"pytorch",
"nystromformer",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | MrAnderson | null | MrAnderson/nystrom-4096-full-trivia-copied-embeddings | 6 | null | transformers | 15,494 | Entry not found |
facebook/regnet-x-004 | 8cd1eb19449b5ed35111f8ae9de7984086739fcf | 2022-06-30T10:14:47.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
]
| image-classification | false | facebook | null | facebook/regnet-x-004 | 6 | null | transformers | 15,495 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
dorltcheng/CXR_BioClinicalBERT_v1 | 89672de09bab87266a6ff3271d16fce8aa83bd39 | 2022-03-16T03:07:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | dorltcheng | null | dorltcheng/CXR_BioClinicalBERT_v1 | 6 | null | transformers | 15,496 | Entry not found |
MrAnderson/yoso-4096-full-trivia | 5a5eac1aa327726c6eb22583ee6b17b034594bdf | 2022-03-16T13:53:02.000Z | [
"pytorch",
"yoso",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | MrAnderson | null | MrAnderson/yoso-4096-full-trivia | 6 | null | transformers | 15,497 | Entry not found |
edbeeching/decision-transformer-gym-halfcheetah-medium | bb89518aa176be7e778249a64e0b565a0e488bf5 | 2022-06-29T19:20:49.000Z | [
"pytorch",
"decision_transformer",
"feature-extraction",
"arxiv:2106.01345",
"transformers",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control"
]
| reinforcement-learning | false | edbeeching | null | edbeeching/decision-transformer-gym-halfcheetah-medium | 6 | null | transformers | 15,498 | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on medium trajectories sampled from the Gym HalfCheetah environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium trajectories sampled from the Gym HalfCheetah environment.
The following normalization coefficients are required to use this model:
mean = [-0.06845774, 0.01641455, -0.18354906, -0.27624607, -0.34061527, -0.09339716, -0.21321271, -0.08774239, 5.1730075, -0.04275195, -0.03610836, 0.14053793, 0.06049833, 0.09550975, 0.067391, 0.00562739, 0.01338279]
std = [0.07472999, 0.30234998, 0.3020731, 0.34417078, 0.17619242, 0.5072056, 0.25670078, 0.32948127, 1.2574149, 0.7600542, 1.9800916, 6.5653625, 7.4663677, 4.472223, 10.566964, 5.6719327, 7.498259]
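A minimal sketch of how these coefficients could be applied, assuming a standard (x - mean) / std normalization of raw observations (this code is not from the original card):
```python
import numpy as np

# Full 17-dimensional coefficients copied from the values listed above.
mean = np.array([-0.06845774, 0.01641455, -0.18354906, -0.27624607, -0.34061527,
                 -0.09339716, -0.21321271, -0.08774239, 5.1730075, -0.04275195,
                 -0.03610836, 0.14053793, 0.06049833, 0.09550975, 0.067391,
                 0.00562739, 0.01338279])
std = np.array([0.07472999, 0.30234998, 0.3020731, 0.34417078, 0.17619242,
                0.5072056, 0.25670078, 0.32948127, 1.2574149, 0.7600542,
                1.9800916, 6.5653625, 7.4663677, 4.472223, 10.566964,
                5.6719327, 7.498259])

def normalize(observation):
    """Standardize a raw HalfCheetah observation before feeding it to the model."""
    return (np.asarray(observation) - mean) / std
```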
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
|
edbeeching/decision-transformer-gym-walker2d-medium-replay | 4cbf4a12f78fa8621efff343df971882ebe20a44 | 2022-06-29T19:22:05.000Z | [
"pytorch",
"decision_transformer",
"feature-extraction",
"arxiv:2106.01345",
"transformers",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control"
]
| reinforcement-learning | false | edbeeching | null | edbeeching/decision-transformer-gym-walker2d-medium-replay | 6 | null | transformers | 15,499 | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on medium-replay trajectories sampled from the Gym Walker2d environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium-replay trajectories sampled from the Gym Walker2d environment.
The following normalization coefficients are required to use this model:
mean = [1.2093647, 0.13264023, -0.14371201, -0.20465161, 0.55776125, -0.03231537, -0.2784661, 0.19130707, 1.4701707, -0.12504704, 0.05649531, -0.09991033, -0.34034026, 0.03546293, -0.08934259, -0.2992438, -0.5984178 ]
std = [0.11929835, 0.3562574, 0.258522, 0.42075422, 0.5202291, 0.15685083, 0.3677098, 0.7161388, 1.3763766, 0.8632222, 2.6364644, 3.0134118, 3.720684, 4.867284, 2.6681626, 3.845187, 5.47683867]
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
|