modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ScyKindness/Hatsune_Miku | d1b6c8860d16dbb2acd842a1402567c8ea59aeff | 2022-04-19T13:27:18.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"arxiv:1911.00536",
"transformers",
"conversational",
"license:mit"
] | conversational | false | ScyKindness | null | ScyKindness/Hatsune_Miku | 31 | null | transformers | 7,100 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multi-turn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Does money buy happiness? |
| Bot | Depends how much money you spend on it .|
|User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
|User |This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |
Please find information about preprocessing, training, and full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT).
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
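If the greedy responses feel repetitive, you can optionally switch the `generate` call in the loop above to sampling. The decoding values below are illustrative choices, not settings from the original card:
```python
# Drop-in replacement for the generate call above (sampling instead of greedy decoding).
chat_history_ids = model.generate(
    bot_input_ids,
    max_length=1000,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,  # enable sampling
    top_k=50,        # sample only from the 50 most likely tokens
    top_p=0.95,      # nucleus sampling threshold
)
```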
|
shishirAI/wav2vec2-xlsr-nepali | dc4e991478b645ddf7be14bbc2819926adb058cf | 2022-04-18T16:35:18.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shishirAI | null | shishirAI/wav2vec2-xlsr-nepali | 31 | null | transformers | 7,101 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-nepali
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-nepali
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the illustrative sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 2
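For readers who want to reproduce a similar run, the list above maps onto `transformers.TrainingArguments` roughly as follows. This is a hedged sketch based only on the listed values, not the author's actual training script; `output_dir` is a placeholder.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-nepali",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=250,
    num_train_epochs=2,
)
```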
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Intel/electra-small-discriminator-mrpc-int8-static | cd77fc803a357eb6d0e96cb232553ebb3cafc546 | 2022-06-10T02:42:51.000Z | [
"pytorch",
"electra",
"text-classification",
"en",
"dataset:glue",
"transformers",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"license:mit",
"model-index"
] | text-classification | false | Intel | null | Intel/electra-small-discriminator-mrpc-int8-static | 31 | null | transformers | 7,102 | ---
language:
- en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- glue
metrics:
- f1
model-index:
- name: electra-small-discriminator-mrpc-int8-static
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.900709219858156
---
# INT8 electra-small-discriminator-mrpc
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [electra-small-discriminator-mrpc](https://huggingface.co/Intel/electra-small-discriminator-mrpc).
The calibration dataloader is the train dataloader. The default calibration sampling size of 300 isn't exactly divisible by the batch size of 8, so the real sampling size is 304.
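As a quick sanity check on those numbers, the calibration set is rounded up to a whole number of batches; a small illustrative snippet (not part of the original card):
```python
import math

batch_size = 8
default_sampling_size = 300

# 300 / 8 = 37.5 batches, so the calibration loop runs 38 full batches.
real_sampling_size = math.ceil(default_sampling_size / batch_size) * batch_size
print(real_sampling_size)  # 304
```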
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9007|0.8983|
| **Model size (MB)** |14|51.8|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
    'Intel/electra-small-discriminator-mrpc-int8-static',
)
```
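Once loaded, the INT8 model can be used like a regular `transformers` sequence-classification model. The snippet below is a hedged sketch: the tokenizer repo and the example sentence pair are assumptions, not part of this card, and it assumes the loaded model is directly callable like the original fp32 model.
```python
import torch
from transformers import AutoTokenizer

# Tokenizer of the original fp32 model (assumed compatible with the INT8 checkpoint).
tokenizer = AutoTokenizer.from_pretrained("Intel/electra-small-discriminator-mrpc")

inputs = tokenizer(
    "The company posted strong quarterly results.",
    "Quarterly results for the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = int8_model(**inputs).logits
print(logits.argmax(-1).item())  # 1 = paraphrase, 0 = not a paraphrase (MRPC convention)
```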
|
maximedb/reviews-generator | 5e2d6eb7650eb4bdba82ee8a5c4874ab14dc6847 | 2022-04-25T19:15:12.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | maximedb | null | maximedb/reviews-generator | 31 | null | transformers | 7,103 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: reviews-generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reviews-generator
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7284 | 0.16 | 500 | 3.5020 |
| 3.6202 | 0.32 | 1000 | 3.4170 |
| 3.5477 | 0.48 | 1500 | 3.3667 |
| 3.5218 | 0.64 | 2000 | 3.3395 |
| 3.5097 | 0.8 | 2500 | 3.3167 |
| 3.5009 | 0.96 | 3000 | 3.3020 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.1
- Datasets 1.18.4
- Tokenizers 0.11.0
|
anton-l/xtreme_s_xlsr_300m_voxpopuli_en | ee39996874355b2efbd63b2f8b32744de9e310af | 2022-05-03T09:55:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:google/xtreme_s",
"transformers",
"voxpopuli",
"google/xtreme_s",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/xtreme_s_xlsr_300m_voxpopuli_en | 31 | null | transformers | 7,104 | ---
language:
- en
license: apache-2.0
tags:
- voxpopuli
- google/xtreme_s
- generated_from_trainer
datasets:
- google/xtreme_s
model-index:
- name: xtreme_s_xlsr_300m_voxpopuli_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_voxpopuli_en
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - VOXPOPULI.EN dataset.
It achieves the following results on the evaluation set:
- Cer: 0.0966
- Loss: 0.3127
- Wer: 0.1549
- Predict Samples: 1842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 1.4221 | 0.19 | 500 | 1.3325 | 0.8224 | 0.3432 |
| 0.8429 | 0.38 | 1000 | 0.7087 | 0.5028 | 0.2023 |
| 0.7377 | 0.57 | 1500 | 0.4900 | 0.2778 | 0.1339 |
| 0.5641 | 0.77 | 2000 | 0.4460 | 0.2540 | 0.1284 |
| 0.5787 | 0.96 | 2500 | 0.4242 | 0.2148 | 0.1167 |
| 0.3465 | 1.15 | 3000 | 0.4210 | 0.2087 | 0.1154 |
| 0.2787 | 1.34 | 3500 | 0.3954 | 0.2090 | 0.1155 |
| 0.2775 | 1.53 | 4000 | 0.3938 | 0.1992 | 0.1133 |
| 0.262 | 1.72 | 4500 | 0.3748 | 0.2104 | 0.1151 |
| 0.3138 | 1.92 | 5000 | 0.3825 | 0.1993 | 0.1134 |
| 0.4331 | 2.11 | 5500 | 0.3648 | 0.1935 | 0.1104 |
| 0.3802 | 2.3 | 6000 | 0.3966 | 0.1910 | 0.1109 |
| 0.3928 | 2.49 | 6500 | 0.3995 | 0.1898 | 0.1100 |
| 0.3441 | 2.68 | 7000 | 0.3764 | 0.1887 | 0.1103 |
| 0.3673 | 2.87 | 7500 | 0.3800 | 0.1843 | 0.1086 |
| 0.3422 | 3.07 | 8000 | 0.3932 | 0.1830 | 0.1092 |
| 0.2933 | 3.26 | 8500 | 0.3672 | 0.1915 | 0.1104 |
| 0.1785 | 3.45 | 9000 | 0.3820 | 0.1796 | 0.1072 |
| 0.321 | 3.64 | 9500 | 0.3533 | 0.1994 | 0.1126 |
| 0.1673 | 3.83 | 10000 | 0.3683 | 0.1856 | 0.1084 |
| 0.1757 | 4.02 | 10500 | 0.3365 | 0.1925 | 0.1102 |
| 0.1881 | 4.22 | 11000 | 0.3528 | 0.1775 | 0.1066 |
| 0.3106 | 4.41 | 11500 | 0.3909 | 0.1754 | 0.1063 |
| 0.25 | 4.6 | 12000 | 0.3734 | 0.1723 | 0.1052 |
| 0.2005 | 4.79 | 12500 | 0.3358 | 0.1900 | 0.1092 |
| 0.2982 | 4.98 | 13000 | 0.3513 | 0.1766 | 0.1060 |
| 0.1552 | 5.17 | 13500 | 0.3720 | 0.1729 | 0.1059 |
| 0.1645 | 5.37 | 14000 | 0.3569 | 0.1713 | 0.1044 |
| 0.2065 | 5.56 | 14500 | 0.3639 | 0.1720 | 0.1048 |
| 0.1898 | 5.75 | 15000 | 0.3660 | 0.1726 | 0.1050 |
| 0.1397 | 5.94 | 15500 | 0.3731 | 0.1670 | 0.1033 |
| 0.2056 | 6.13 | 16000 | 0.3782 | 0.1650 | 0.1030 |
| 0.1859 | 6.32 | 16500 | 0.3903 | 0.1667 | 0.1033 |
| 0.1374 | 6.52 | 17000 | 0.3721 | 0.1736 | 0.1048 |
| 0.2482 | 6.71 | 17500 | 0.3899 | 0.1643 | 0.1023 |
| 0.159 | 6.9 | 18000 | 0.3847 | 0.1687 | 0.1032 |
| 0.1487 | 7.09 | 18500 | 0.3817 | 0.1671 | 0.1030 |
| 0.1942 | 7.28 | 19000 | 0.4120 | 0.1616 | 0.1018 |
| 0.1517 | 7.47 | 19500 | 0.3856 | 0.1635 | 0.1020 |
| 0.0946 | 7.67 | 20000 | 0.3838 | 0.1621 | 0.1016 |
| 0.1455 | 7.86 | 20500 | 0.3749 | 0.1652 | 0.1020 |
| 0.1303 | 8.05 | 21000 | 0.4074 | 0.1615 | 0.1011 |
| 0.1207 | 8.24 | 21500 | 0.4121 | 0.1606 | 0.1008 |
| 0.0727 | 8.43 | 22000 | 0.3948 | 0.1607 | 0.1009 |
| 0.1123 | 8.62 | 22500 | 0.4025 | 0.1603 | 0.1009 |
| 0.1606 | 8.82 | 23000 | 0.3963 | 0.1580 | 0.1004 |
| 0.1458 | 9.01 | 23500 | 0.3991 | 0.1574 | 0.1002 |
| 0.2286 | 9.2 | 24000 | 0.4149 | 0.1596 | 0.1009 |
| 0.1284 | 9.39 | 24500 | 0.4251 | 0.1572 | 0.1002 |
| 0.1141 | 9.58 | 25000 | 0.4264 | 0.1579 | 0.1002 |
| 0.1823 | 9.77 | 25500 | 0.4230 | 0.1562 | 0.0999 |
| 0.2514 | 9.97 | 26000 | 0.4242 | 0.1564 | 0.0999 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.1+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
avichr/Legal-heBERT_ft | 0f00d4cc4098e5fcb800af91beeb7473c7c5686f | 2022-07-07T07:31:58.000Z | [
"pytorch",
"bert",
"fill-mask",
"arxiv:1911.03090",
"arxiv:2010.02559",
"transformers",
"autotrain_compatible"
] | fill-mask | false | avichr | null | avichr/Legal-heBERT_ft | 31 | 1 | transformers | 7,105 | # Legal-HeBERT
Legal-HeBERT is a BERT model for Hebrew legal and legislative domains. It is intended to improve legal NLP research and tools development in Hebrew. We release two versions of Legal-HeBERT. The first version is a fine-tuned model of [HeBERT](https://github.com/avichaychriqui/HeBERT) applied on legal and legislative documents. The second version uses [HeBERT](https://github.com/avichaychriqui/HeBERT)'s architecture guidelines to train a BERT model from scratch. <br>
We continue to collect legal data, examine different architectural designs, and produce tagged datasets and legal tasks for evaluating and developing Hebrew legal tools.
## Training Data
Our training datasets are:
| Name | Hebrew Description | Size (GB) | Documents | Sentences | Words | Notes |
|----------------------------------------------------------------------------------------------------------------------------------- |-------------------------------------------------------------------------- |----------- |----------- |------------ |------------- |----------------------------------------- |
| The Israeli Law Book | ספר החוקים הישראלי | 0.05 | 2338 | 293352 | 4851063 | |
| Judgments of the Supreme Court | מאגר פסקי הדין של בית המשפט העליון | 0.7 | 212348 | 5790138 | 79672415 | |
| custody courts | החלטות בתי הדין למשמורת | 2.46 | 169,708 | 8,555,893 | 213,050,492 | |
| Law memoranda, drafts of secondary legislation and drafts of support tests that have been distributed to the public for comment | תזכירי חוק, טיוטות חקיקת משנה וטיוטות מבחני תמיכה שהופצו להערות הציבור | 0.4 | 3,291 | 294,752 | 7,218,960 | |
| Supervisors of Land Registration judgments | מאגר פסקי דין של המפקחים על רישום המקרקעין | 0.02 | 559 | 67,639 | 1,785,446 | |
| Decisions of the Labor Court - Corona | מאגר החלטות בית הדין לעניין שירות התעסוקה – קורונה | 0.001 | 146 | 3505 | 60195 | |
| Decisions of the Israel Lands Council | החלטות מועצת מקרקעי ישראל | | 118 | 11283 | 162692 | aggregate file |
| Judgments of the Disciplinary Tribunal and the Israel Police Appeals Tribunal | פסקי דין של בית הדין למשמעת ובית הדין לערעורים של משטרת ישראל | 0.02 | 54 | 83724 | 1743419 | aggregate files |
| Disciplinary Appeals Committee in the Ministry of Health | ועדת ערר לדין משמעתי במשרד הבריאות | 0.004 | 252 | 21010 | 429807 | 465 files are scanned and could not be parsed |
| Attorney General's Positions | מאגר התייצבויות היועץ המשפטי לממשלה | 0.008 | 281 | 32724 | 813877 | |
| Legal-Opinion of the Attorney General | מאגר חוות דעת היועץ המשפטי לממשלה | 0.002 | 44 | 7132 | 188053 | |
| | | | | | | |
| total | | 3.665 | 389,139 | 15,161,152 | 309,976,419 | |
We thank <b>Yair Gardin</b> for referring us to the governance data, <b>Elhanan Schwarts</b> for collecting and parsing the Israeli law book, and <b>Jonathan Schler</b> for collecting the judgments of the Supreme Court.
## Training process
* Vocabulary size: 50,000 tokens
* 4 epochs (≈1M steps)
* lr=5e-5
* mlm_probability=0.15
* batch size = 32 (for each gpu)
* NVIDIA GeForce RTX 2080 TI + NVIDIA GeForce RTX 3090 (1 week training)
### Additional training settings:
<b>Fine-tuned [HeBERT](https://github.com/avichaychriqui/HeBERT) model:</b> The first eight layers were frozen (as [Lee et al. (2019)](https://arxiv.org/abs/1911.03090) suggest)<br>
<b>Legal-HeBERT trained from scratch:</b> The training process is similar to [HeBERT](https://github.com/avichaychriqui/HeBERT) and inspired by [Chalkidis et al. (2020)](https://arxiv.org/abs/2010.02559) <br>
## How to use
The models can be found on the Hugging Face Hub and can be fine-tuned for any downstream task:
```
# !pip install transformers==4.14.1
from transformers import AutoTokenizer, AutoModel
model_name = 'avichr/Legal-heBERT_ft' # for the fine-tuned HeBERT model
model_name = 'avichr/Legal-heBERT' # for legal HeBERT model trained from scratch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model=model_name,
)
fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.")
```
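Beyond mask filling, the `AutoModel` loaded above can also provide contextual embeddings for downstream tasks. A minimal sketch reusing `tokenizer` and `model` from the snippet above; the example sentences are placeholders and the mean-pooling choice is an illustration, not the authors' recommendation:
```python
import torch

sentences = ["זהו משפט ראשון.", "זהו משפט שני."]  # placeholder sentences
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden states, ignoring padding tokens.
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (2, hidden_size)
```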
## Stay tuned!
We are still working on our models and the datasets. We will edit this page as we progress. We are open to collaborations.
## If you use this model, please cite us as:
Chriqui, Avihay, Yahav, Inbal and Bar-Siman-Tov, Ittai, Legal HeBERT: A BERT-based NLP Model for Hebrew Legal, Judicial and Legislative Texts (June 27, 2022). Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4147127
```
@article{chriqui2021hebert,
  title={Legal HeBERT: A BERT-based NLP Model for Hebrew Legal, Judicial and Legislative Texts},
  author={Chriqui, Avihay and Yahav, Inbal and Bar-Siman-Tov, Ittai},
  journal={SSRN preprint:4147127},
  year={2022}
}
```
## Contact us
[Avichay Chriqui](mailto:[email protected]), The Coller AI Lab <br>
[Inbal Yahav](mailto:[email protected]), The Coller AI Lab <br>
[Ittai Bar-Siman-Tov](mailto:[email protected]), the BIU Innovation Lab for Law, Data-Science and Digital Ethics <br>
Thank you, תודה, شكرا <br>
|
kabelomalapane/En-Tn | 58250ed6164cc2202c45764b846fb916a3f56e52 | 2022-06-02T07:03:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | kabelomalapane | null | kabelomalapane/En-Tn | 31 | null | transformers | 7,106 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Tn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Tn
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-tn](https://huggingface.co/Helsinki-NLP/opus-mt-en-tn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6174
- Bleu: 32.2889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ziq/depression_tweet | 242c1526e6b6c3d2be7314281b511bcf4f7d968e | 2022-06-06T09:09:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | ziq | null | ziq/depression_tweet | 31 | null | transformers | 7,107 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: depression_tweet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# depression_tweet
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1606
- Accuracy: 0.9565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 216 | 0.1369 | 0.9497 |
| No log | 2.0 | 432 | 0.1588 | 0.9552 |
| 0.0514 | 3.0 | 648 | 0.1647 | 0.9562 |
| 0.0514 | 4.0 | 864 | 0.1606 | 0.9565 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jboomc/rotten_tomatoes_finetuned | 74ee8d0e562bf0992f6a7d98cc75ec977095e2d5 | 2022-06-08T16:16:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jboomc | null | jboomc/rotten_tomatoes_finetuned | 31 | null | transformers | 7,108 | Entry not found |
ml6team/keyphrase-extraction-kbir-kptimes | baceaac4c0e2eba23d6307d3a2dce69fbbdfd2b9 | 2022-06-16T18:22:00.000Z | [
"pytorch",
"roberta",
"token-classification",
"en",
"dataset:midas/kptimes",
"arxiv:2112.08547",
"arxiv:1911.12559",
"transformers",
"keyphrase-extraction",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | ml6team | null | ml6team/keyphrase-extraction-kbir-kptimes | 31 | null | transformers | 7,109 | ---
language: en
license: mit
tags:
- keyphrase-extraction
datasets:
- midas/kptimes
metrics:
- seqeval
widget:
- text: "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document.
Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading
it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail
and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents,
this process can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical
and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency,
occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies
and context of words in a text."
example_title: "Example 1"
- text: "FoodEx is the largest trade exhibition for food and drinks in Asia, with about 70,000 visitors checking out the products presented by hundreds of participating companies. I was lucky to enter as press; otherwise, visitors must be affiliated with the food industry— and pay ¥5,000 — to enter. The FoodEx menu is global, including everything from cherry beer from Germany and premium Mexican tequila to top-class French and Chinese dumplings. The event was a rare chance to try out both well-known and exotic foods and even see professionals making them. In addition to booths offering traditional Japanese favorites such as udon and maguro sashimi, there were plenty of innovative twists, such as dorayaki , a sweet snack made of two pancakes and a red-bean filling, that came in coffee and tomato flavors. While I was there I was lucky to catch the World Sushi Cup Japan 2013, where top chefs from around the world were competing … and presenting a wide range of styles that you would not normally see in Japan, like the flower makizushi above."
example_title: "Example 2"
model-index:
- name: ml6team/keyphrase-extraction-kbir-kptimes
results:
- task:
type: keyphrase-extraction
name: Keyphrase Extraction
dataset:
type: midas/kptimes
name: kptimes
metrics:
- type: F1 (Seqeval)
value: 0.000
name: F1 (Seqeval)
- type: F1@M
value: 0.331
name: F1@M
---
# 🔑 Keyphrase Extraction Model: KBIR-KPTimes
Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳.
Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text.
## 📓 Model Description
This model uses [KBIR](https://huggingface.co/bloomberg/KBIR) as its base model and fine-tunes it on the [KPTimes dataset](https://huggingface.co/datasets/midas/kptimes). KBIR or Keyphrase Boundary Infilling with Replacement is a pre-trained model which utilizes a multi-task learning setup for optimizing a combined loss of Masked Language Modeling (MLM), Keyphrase Boundary Infilling (KBI) and Keyphrase Replacement Classification (KRC).
You can find more information about the architecture in this [paper](https://arxiv.org/abs/2112.08547).
Keyphrase extraction models are transformer models fine-tuned as a token classification problem where each word in the document is classified as being part of a keyphrase or not.
| Label | Description |
| ----- | ------------------------------- |
| B-KEY | At the beginning of a keyphrase |
| I-KEY | Inside a keyphrase |
| O | Outside a keyphrase |
## ✋ Intended Uses & Limitations
### 🛑 Limitations
* This keyphrase extraction model is very domain-specific and will perform very well on news articles from NY Times. It's not recommended to use this model for other domains, but you are free to test it out.
* Limited amount of predicted keyphrases.
* Only works for English documents.
* For a custom model, please consult the [training notebook]() for more information.
### ❓ How To Use
```python
from transformers import (
TokenClassificationPipeline,
AutoModelForTokenClassification,
AutoTokenizer,
)
from transformers.pipelines import AggregationStrategy
import numpy as np
# Define keyphrase extraction pipeline
class KeyphraseExtractionPipeline(TokenClassificationPipeline):
    def __init__(self, model, *args, **kwargs):
        super().__init__(
            model=AutoModelForTokenClassification.from_pretrained(model),
            tokenizer=AutoTokenizer.from_pretrained(model),
            *args,
            **kwargs
        )

    def postprocess(self, model_outputs):
        results = super().postprocess(
            model_outputs=model_outputs,
            aggregation_strategy=AggregationStrategy.SIMPLE,
        )
        return np.unique([result.get("word").strip() for result in results])
```
```python
# Load pipeline
model_name = "ml6team/keyphrase-extraction-kbir-kptimes"
extractor = KeyphraseExtractionPipeline(model=model_name)
```
```python
# Inference
text = """
Keyphrase extraction is a technique in text analysis where you extract the
important keyphrases from a document. Thanks to these keyphrases humans can
understand the content of a text very quickly and easily without reading it
completely. Keyphrase extraction was first done primarily by human annotators,
who read the text in detail and then wrote down the most important keyphrases.
The disadvantage is that if you work with a lot of documents, this process
can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine
learning methods, that use statistical and linguistic features, are widely used
for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods.
Classical methods look at the frequency, occurrence and order of words
in the text, whereas these neural approaches can capture long-term
semantic dependencies and context of words in a text.
""".replace("\n", " ")
keyphrases = extractor(text)
print(keyphrases)
```
```
# Output
['artificial intelligence']
```
## 📚 Training Dataset
[KPTimes](https://huggingface.co/datasets/midas/kptimes) is a keyphrase extraction/generation dataset consisting of 279,923 news articles from NY Times and 10K from JPTimes and annotated by professional indexers or editors.
You can find more information in the [paper](https://arxiv.org/abs/1911.12559).
## 👷♂️ Training procedure
For more in detail information, you can take a look at the [training notebook]().
### Training parameters
| Parameter | Value |
| --------- | ------|
| Learning Rate | 1e-4 |
| Epochs | 50 |
| Early Stopping Patience | 3 |
### Preprocessing
The documents in the dataset are already preprocessed into lists of words with the corresponding labels. The only things left to do are tokenization and realignment of the labels so that they correspond with the right subword tokens.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
# Labels
label_list = ["B", "I", "O"]
lbl2idx = {"B": 0, "I": 1, "O": 2}
idx2label = {0: "B", 1: "I", 2: "O"}
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR")
max_length = 512
# Dataset parameters
dataset_full_name = "midas/kptimes"
dataset_subset = "raw"
dataset_document_column = "document"
dataset_biotags_column = "doc_bio_tags"
def preprocess_fuction(all_samples_per_split):
    tokenized_samples = tokenizer.batch_encode_plus(
        all_samples_per_split[dataset_document_column],
        padding="max_length",
        truncation=True,
        is_split_into_words=True,
        max_length=max_length,
    )
    total_adjusted_labels = []
    for k in range(0, len(tokenized_samples["input_ids"])):
        prev_wid = -1
        word_ids_list = tokenized_samples.word_ids(batch_index=k)
        existing_label_ids = all_samples_per_split[dataset_biotags_column][k]
        i = -1
        adjusted_label_ids = []
        for wid in word_ids_list:
            if wid is None:
                adjusted_label_ids.append(lbl2idx["O"])
            elif wid != prev_wid:
                i = i + 1
                adjusted_label_ids.append(lbl2idx[existing_label_ids[i]])
                prev_wid = wid
            else:
                adjusted_label_ids.append(
                    lbl2idx[
                        f"{'I' if existing_label_ids[i] == 'B' else existing_label_ids[i]}"
                    ]
                )
        total_adjusted_labels.append(adjusted_label_ids)
    tokenized_samples["labels"] = total_adjusted_labels
    return tokenized_samples
# Load dataset
dataset = load_dataset(dataset_full_name, dataset_subset)
# Preprocess dataset
tokenized_dataset = dataset.map(preprocess_fuction, batched=True)
```
### Postprocessing (Without Pipeline Function)
If you do not use the pipeline function, you must filter out the B and I labeled tokens. Each B and I will then be merged into a keyphrase. Finally, you need to strip the keyphrases to make sure all unnecessary spaces have been removed.
```python
# Define post_process functions
def concat_tokens_by_tag(keyphrases):
    keyphrase_tokens = []
    for id, label in keyphrases:
        if label == "B":
            keyphrase_tokens.append([id])
        elif label == "I":
            if len(keyphrase_tokens) > 0:
                keyphrase_tokens[len(keyphrase_tokens) - 1].append(id)
    return keyphrase_tokens

def extract_keyphrases(example, predictions, tokenizer, index=0):
    keyphrases_list = [
        (id, idx2label[label])
        for id, label in zip(
            np.array(example["input_ids"]).squeeze().tolist(), predictions[index]
        )
        if idx2label[label] in ["B", "I"]
    ]
    processed_keyphrases = concat_tokens_by_tag(keyphrases_list)
    extracted_kps = tokenizer.batch_decode(
        processed_keyphrases,
        skip_special_tokens=True,
        clean_up_tokenization_spaces=True,
    )
    return np.unique([kp.strip() for kp in extracted_kps])
```
## 📝 Evaluation Results
Traditional evaluation metrics are precision, recall and F1-score @k,M, where k stands for the first k predicted keyphrases and M for the average number of predicted keyphrases.
The model achieves the following results on the KPTimes test set:
| Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M |
|:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|
| KPTimes Test Set | 0.19 | 0.35 | 0.23 | 0.10 | 0.36 | 0.15 | 0.36 | 0.36 | 0.33 |
For more information on the evaluation process, you can take a look at the keyphrase extraction [evaluation notebook]().
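For reference, here is a minimal sketch of how precision, recall and F1@k can be computed for a single document with exact string matching. This is an illustration, not the implementation used in the evaluation notebook:
```python
def scores_at_k(predicted, gold, k):
    preds = [p.lower() for p in predicted[:k]]
    gold_set = {g.lower() for g in gold}
    tp = sum(p in gold_set for p in preds)
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

print(scores_at_k(["artificial intelligence", "keyphrase extraction"], ["artificial intelligence"], k=5))
```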
## 🚨 Issues
Please feel free to start discussions in the Community Tab. |
nickmuchi/yolos-base-finetuned-masks | 5b99479dde3098c15b2e09463a046a05e4fe5985 | 2022-06-20T00:01:11.000Z | [
"pytorch",
"yolos",
"object-detection",
"transformers"
] | object-detection | false | nickmuchi | null | nickmuchi/yolos-base-finetuned-masks | 31 | null | transformers | 7,110 | Entry not found |
wiselinjayajos/t5-end2end-questions-generation-squadV2 | f14d9ae2fb2b7a679e97900d7a813f1d3e1a8b07 | 2022-07-06T02:27:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | wiselinjayajos | null | wiselinjayajos/t5-end2end-questions-generation-squadV2 | 31 | null | transformers | 7,111 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-end2end-questions-generation-squadV2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation-squadV2
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
oussama/Layoutlm_Form_information_extraction | e9521ba8dda211ce890cb755e77a23a55e111714 | 2022-06-24T08:33:16.000Z | [
"pytorch",
"layoutlm",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | oussama | null | oussama/Layoutlm_Form_information_extraction | 31 | null | transformers | 7,112 | Entry not found |
Matthijs/mobilenet_v2_1.0_224 | ad6885df81ee0cd4a75e867a1dca518f21cfd516 | 2022-06-28T12:51:25.000Z | [
"pytorch",
"coreml",
"mobilenet_v2",
"dataset:imagenet-1k",
"arxiv:1801.04381",
"transformers",
"vision",
"image-classification",
"license:other"
] | image-classification | false | Matthijs | null | Matthijs/mobilenet_v2_1.0_224 | 31 | null | transformers | 7,113 | ---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileNet V2
MobileNet V2 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).
Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_1.0\_224**, where **1.0** is the depth multiplier and **224** is the resolution of the input images the model was trained on.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileNetV2FeatureExtractor, MobileNetV2ForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileNetV2FeatureExtractor.from_pretrained("Matthijs/mobilenet_v2_1.0_224")
model = MobileNetV2ForImageClassification.from_pretrained("Matthijs/mobilenet_v2_1.0_224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
Currently, both the feature extractor and model support PyTorch.
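As a small follow-up to the snippet above, you can also inspect the top-5 predictions instead of only the argmax. This reuses `logits` and `model` from the example and is illustrative only:
```python
# Top-5 classes by probability (remember that index 0 is the extra "background" class).
probs = logits.softmax(-1)[0]
top5 = probs.topk(5)
for score, idx in zip(top5.values, top5.indices):
    print(model.config.id2label[idx.item()], round(float(score), 3))
```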
### BibTeX entry and citation info
```bibtex
@inproceedings{mobilenetv22018,
  title={MobileNetV2: Inverted Residuals and Linear Bottlenecks},
  author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen},
  booktitle={CVPR},
  year={2018}
}
```
|
nvidia/stt_fr_conformer_ctc_large | d4e37487087ab8e199eea40aaf0200ac40ab94d5 | 2022-06-30T20:01:15.000Z | [
"nemo",
"fr",
"dataset:multilingual_librispeech",
"dataset:mozilla-foundation/common_voice_7_0",
"dataset:VoxPopuli",
"arxiv:2005.08100",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"Conformer",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"Riva",
"license:cc-by-4.0",
"model-index"
] | automatic-speech-recognition | false | nvidia | null | nvidia/stt_fr_conformer_ctc_large | 31 | 2 | nemo | 7,114 | ---
language: fr
library_name: nemo
datasets:
- multilingual_librispeech
- mozilla-foundation/common_voice_7_0
- VoxPopuli
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- Conformer
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
- Riva
license: cc-by-4.0
model-index:
- name: stt_fr_conformer_ctc_large
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MCV 7.0
type: mozilla-foundation/common_voice_7_0
config: fr
split: dev
args:
language: fr
metrics:
- name: Dev WER
type: wer
value: 8.35
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MCV 7.0
type: mozilla-foundation/common_voice_7_0
config: fr
split: test
args:
language: fr
metrics:
- name: Test WER
type: wer
value: 9.63
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Multilingual Librispeech
type: multilingual_librispeech
config: fr
split: dev
args:
language: fr
metrics:
- name: Dev WER
type: wer
value: 5.88
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Multilingual Librispeech
type: multilingual_librispeech
config: fr
split: test
args:
language: fr
metrics:
- name: Test WER
type: wer
value: 4.91
---
# NVIDIA Conformer-CTC Large (fr)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
| [](#deployment-with-nvidia-riva) |
This model was trained on a composite dataset comprising over 1,500 hours of French speech.
It is a non-autoregressive "large" variant of Conformer, with around 120 million parameters.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-ctc) for complete architecture details.
It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva).
## Usage
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/stt_fr_conformer_ctc_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_fr_conformer_ctc_large" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16,000 Hz (16 kHz) mono-channel audio (WAV files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
Conformer-CTC model is a non-autoregressive variant of Conformer model [1] for Automatic Speech Recognition which uses CTC loss/decoding instead of Transducer. You may find more info on the detail of this model here: [Conformer-CTC Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-ctc).
## Training
The NeMo toolkit [3] was used for training the models for several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_ctc_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
The checkpoint of the language model used for rescoring can be found [here]( https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_fr_conformer_ctc_large). You may find more info on how to train and use language models for ASR models here: [ASR Language Modeling](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html)
## Datasets
All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising over a thousand hours of French speech:
- MozillaCommonVoice 7.0 - 356 hours
- Multilingual LibriSpeech - 1036 hours
- VoxPopuli - 182 hours
Both models use the same dataset, except for a preprocessing step that strips hyphens from the data for the secondary model's training.
## Performance
The performance of automatic speech recognition models is measured using Word Error Rate (WER). Since this model is trained on multiple domains and a much larger corpus, it will generally perform better at transcribing audio.
The latest model obtains the following greedy-decoding scores on these evaluation datasets:
- 8.35 % on MCV7.0 dev
- 9.63 % on MCV7.0 test
- 5.88 % on MLS dev
- 4.91 % on MLS test
With a beam size of 128 and a 4-gram KenLM model:
- 7.95 % on MCV7.0 dev
- 9.16 % on MCV7.0 test
- 5.57 % on MLS dev
- 4.66 % on MLS test
Note that these evaluation datasets have been filtered and preprocessed to contain only French alphabet characters, with all punctuation removed except hyphens and apostrophes.
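As a rough illustration of that filtering (the exact preprocessing script is not included in this card, so the character set below is an assumption):
```python
import re

def normalize_fr(text: str) -> str:
    text = text.lower()
    # Keep French letters, spaces, hyphens and apostrophes; replace everything else (assumed rule).
    text = re.sub(r"[^a-zàâäéèêëîïôöùûüÿæœç' \-]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(normalize_fr("Bonjour, le monde ! C'est l'été."))  # "bonjour le monde c'est l'été"
```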
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
Further, since portions of the training set contain text from both pre- and post- 1990 orthographic reform, regularity of punctuation may vary between the two styles.
For downstream tasks requiring more consistency, fine-tuning or downstream processing may be required. If exact orthography is not necessary, then using the secondary model is advised.
## Deployment with NVIDIA Riva
For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
- [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
- [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
- [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
|
Matthijs/deeplabv3_mobilenet_v2_1.0_513 | c4897d738a13a29458817fc8f8936fb6a7b3ffcc | 2022-06-28T13:39:52.000Z | [
"pytorch",
"coreml",
"mobilenet_v2",
"dataset:pascal-voc",
"arxiv:1801.04381",
"arxiv:1802.02611",
"transformers",
"vision",
"image-segmentation",
"license:other"
] | image-segmentation | false | Matthijs | null | Matthijs/deeplabv3_mobilenet_v2_1.0_513 | 31 | null | transformers | 7,115 | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- pascal-voc
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg
example_title: Cat
---
# MobileNetV2 with DeepLabV3+
MobileNet V2 model pre-trained on PASCAL VOC at resolution 513x513. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/deeplab).
Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
The model in this repo adds a [DeepLabV3+](https://arxiv.org/abs/1802.02611) head to the MobileNetV2 backbone for semantic segmentation.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MobileNetV2FeatureExtractor, MobileNetV2ForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileNetV2FeatureExtractor.from_pretrained("Matthijs/deeplabv3_mobilenet_v2_1.0_513")
model = MobileNetV2ForSemanticSegmentation.from_pretrained("Matthijs/deeplabv3_mobilenet_v2_1.0_513")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_mask = logits.argmax(1).squeeze(0)
```
Currently, both the feature extractor and model support PyTorch.
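As an optional post-processing step (not part of the original card), the low-resolution logits can be upsampled to the input image size before taking the per-pixel argmax. This reuses `logits` and `image` from the example above:
```python
import torch

upsampled_logits = torch.nn.functional.interpolate(
    logits,                 # shape: (batch, num_classes, h, w)
    size=image.size[::-1],  # PIL size is (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
full_res_mask = upsampled_logits.argmax(dim=1).squeeze(0)  # (height, width) class indices
```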
### BibTeX entry and citation info
```bibtex
@inproceedings{deeplabv3plus2018,
  title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
  author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
  booktitle={ECCV},
  year={2018}
}
```
|
emilylearning/cond_ft_none_on_reddit__prcnt_100__test_run_False__roberta-base | 00dfd2e873cc709381e84c99471e685666add3e5 | 2022-07-01T03:07:41.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/cond_ft_none_on_reddit__prcnt_100__test_run_False__roberta-base | 31 | null | transformers | 7,116 | Entry not found |
Mimita6654/test | 936f107cd2e99ea098715e748dc4db9d8d95dc17 | 2022-07-02T07:47:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Mimita6654 | null | Mimita6654/test | 31 | null | transformers | 7,117 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Gerwin/legal-bert-dutch-english | 1204ca4dc23788999f63e602214bc0dd4c5e60b7 | 2022-07-21T09:18:09.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"en",
"nl",
"transformers",
"legal",
"license:apache-2.0"
] | feature-extraction | false | Gerwin | null | Gerwin/legal-bert-dutch-english | 31 | null | transformers | 7,118 | ---
language:
- en
- nl
tags:
- bert
- legal
license: apache-2.0
metrics:
- F1
---
# Legal BERT model applicable for Dutch and English
A BERT model further trained from [mBERT](https://huggingface.co/bert-base-multilingual-uncased) on legal documents. The thesis can be downloaded [here](https://www.ru.nl/publish/pages/769526/gerwin_de_kruijf.pdf).
## Data
The model is further trained the same way as [EurlexBERT](https://huggingface.co/nlpaueb/bert-base-uncased-eurlex): regulations, decisions, directives, and parliamentary questions were acquired in both Dutch and English. A total of 184k documents, around 295M words, was used to further train the model. This is less than 9% of the size of the data used to train the original BERT model.
Further training was done for 60k steps, since it showed better results compared to a 100k checkpoint (which was suggested in the original BERT paper). Using more than 100k steps was not beneficial.
## How to use
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("Gerwin/legal-bert-dutch-english")
model = AutoModel.from_pretrained("Gerwin/legal-bert-dutch-english") # PyTorch
model = TFAutoModel.from_pretrained("Gerwin/legal-bert-dutch-english") # TensorFlow
```
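For classification benchmarks like the ones below, the encoder is typically loaded with a task-specific head instead of the bare `AutoModel`. A minimal sketch; the number of labels is only an example, taken from the Rabobank task described below:
```python
from transformers import AutoModelForSequenceClassification

classifier = AutoModelForSequenceClassification.from_pretrained(
    "Gerwin/legal-bert-dutch-english",
    num_labels=30,  # e.g. the 30 document classes of the Rabobank dataset
)
```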
## Benchmarks
Here are a couple of comparisons between popular BERT models and this model. The fine-tuning procedures for these benchmarks are identical for each pre-trained model, and are more explained in the thesis. You may be able to achieve higher scores for individual models by optimizing fine-tuning procedures. The table shows the weighted F1 scores.
### Legal topic classification
| Model | [Multi-EURLEX (NL)](https://huggingface.co/datasets/multi_eurlex) |
| ----------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- |
| **legal-bert-dutch-english** | **0.786** |
| [mBERT](https://huggingface.co/bert-base-multilingual-uncased) | 0.779 |
| [BERTje](https://huggingface.co/GroNLP/bert-base-dutch-cased) | 0.775 |
| Model | [Multi-EURLEX (EN)](https://huggingface.co/datasets/multi_eurlex) |
| ----------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- |
| **legal-bert-dutch-english** | 0.786 |
| [mBERT](https://huggingface.co/bert-base-multilingual-uncased) | 0.772 |
| [BERT](https://huggingface.co/bert-base-uncased) | 0.791 |
| [LegalBERT](https://huggingface.co/nlpaueb/legal-bert-base-uncased) | 0.791 |
| [EurlexBERT](https://huggingface.co/nlpaueb/bert-base-uncased-eurlex) | **0.795** |
### Multi-class classification (Rabobank)
This dataset is not open-source, but it is still an interesting case since the dataset contains both Dutch and English legal documents that have to be classified. The dataset consists of 8000 long legal documents (2000 Dutch & 6000 English) with a total of 30 classes. Using a combined architecture of a Dutch and English BERT model was not beneficial, since documents from both languages could belong to the same class.
| Model | Rabobank |
| ---------------------------------- | ---------------------------------- |
| **legal-bert-dutch-english** | **0.732** |
| [mBERT](https://huggingface.co/bert-base-multilingual-uncased) | 0.713 |
|
dingusagar/vit-base-avengers-v1 | 1a077fe19b4a2c013fd360f9f0dba187f7974b7a | 2022-07-10T10:47:03.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | dingusagar | null | dingusagar/vit-base-avengers-v1 | 31 | null | transformers | 7,119 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-avengers-v1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: avengers-dataset
metrics:
- name: Accuracy
type: accuracy
value: 0.8683385579937304
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-avengers-v1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5324
- Accuracy: 0.8683
Refer to this [medium article](https://medium.com/@dingusagar/marvel-character-classification-by-fine-tuning-vision-transformer-45c14a7d8719) for more info on how it was trained.
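A minimal inference sketch; the image path is a placeholder to replace with your own file:
```python
from transformers import pipeline

# Image classification with the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="dingusagar/vit-base-avengers-v1")

# Replace with the path or URL of the image you want to classify.
predictions = classifier("path/to/avenger.jpg")
print(predictions)  # top predicted characters with scores
```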
## Limitations
Training was done on Google Images results for the following search terms, each representing a class:
Iron Man,Captain America,Thor,Spider Man,Docter Strage,Black Panther,Ant Man,Captain Marvel,Hulk,Black Widow,Hawkeye Avengers,Scarlet Witch,Vision Avengers,Bucky Barnes,Falcon Avengers,Loki
Therefore the model has mostly seen images where these superheroes are in their suits or superhero outfits.
For example, an image of Hulk is detected correctly, but an image of Bruce Banner is not, simply because the model hasn't seen such images.
A little data augmentation would help.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8183 | 1.27 | 100 | 1.0134 | 0.8464 |
| 0.2234 | 2.53 | 200 | 0.6146 | 0.8495 |
| 0.1206 | 3.8 | 300 | 0.5324 | 0.8683 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nloc2578/new_ques2 | 9ba4f05a4474a41aa47b5403a1a151b003f72b62 | 2022-07-12T09:11:39.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nloc2578 | null | nloc2578/new_ques2 | 31 | null | transformers | 7,120 | ---
tags:
- generated_from_trainer
model-index:
- name: new_ques2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new_ques2
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
davanstrien/clip-roberta-finetuned | aaf97d71276b9d75118a7e6ef392c73a6019922c | 2022-07-15T16:09:56.000Z | [
"pytorch",
"tensorboard",
"vision-text-dual-encoder",
"feature-extraction",
"dataset:davanstrien/manuscript_noisy_labels_iiif",
"transformers",
"generated_from_trainer",
"model-index"
] | feature-extraction | false | davanstrien | null | davanstrien/clip-roberta-finetuned | 31 | null | transformers | 7,121 | ---
tags:
- generated_from_trainer
datasets:
- davanstrien/manuscript_noisy_labels_iiif
model-index:
- name: clip-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned
This model is a fine-tuned version of [./clip-roberta](https://huggingface.co/./clip-roberta) on the davanstrien/manuscript_noisy_labels_iiif dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5792
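A minimal loading sketch, assuming the tokenizer and image processor were saved alongside the model weights (if not, they would have to be loaded from the original text and vision checkpoints); the image path and caption are placeholders:
```python
from PIL import Image
from transformers import VisionTextDualEncoderModel, VisionTextDualEncoderProcessor

model = VisionTextDualEncoderModel.from_pretrained("davanstrien/clip-roberta-finetuned")
processor = VisionTextDualEncoderProcessor.from_pretrained("davanstrien/clip-roberta-finetuned")

image = Image.open("manuscript_page.jpg")  # placeholder path
inputs = processor(text=["a decorated initial"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
print(outputs.logits_per_image)  # image-text similarity scores
```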
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.9841 | 0.07 | 500 | 3.4112 |
| 2.72 | 0.15 | 1000 | 3.3430 |
| 2.6319 | 0.22 | 1500 | 3.2295 |
| 2.5781 | 0.29 | 2000 | 3.1645 |
| 2.5339 | 0.36 | 2500 | 3.1226 |
| 2.503 | 0.44 | 3000 | 3.0856 |
| 2.4581 | 0.51 | 3500 | 3.0639 |
| 2.4494 | 0.58 | 4000 | 3.0415 |
| 2.4275 | 0.65 | 4500 | 3.0245 |
| 2.3909 | 0.73 | 5000 | 2.9991 |
| 2.3902 | 0.8 | 5500 | 2.9931 |
| 2.3741 | 0.87 | 6000 | 2.9612 |
| 2.3536 | 0.95 | 6500 | 2.9509 |
| 2.3392 | 1.02 | 7000 | 2.9289 |
| 2.3083 | 1.09 | 7500 | 2.9214 |
| 2.3094 | 1.16 | 8000 | 2.9153 |
| 2.2864 | 1.24 | 8500 | 2.9034 |
| 2.2893 | 1.31 | 9000 | 2.8963 |
| 2.2697 | 1.38 | 9500 | 2.8847 |
| 2.2762 | 1.46 | 10000 | 2.8665 |
| 2.2667 | 1.53 | 10500 | 2.8536 |
| 2.2548 | 1.6 | 11000 | 2.8472 |
| 2.238 | 1.67 | 11500 | 2.8491 |
| 2.2423 | 1.75 | 12000 | 2.8257 |
| 2.2406 | 1.82 | 12500 | 2.8287 |
| 2.2248 | 1.89 | 13000 | 2.8193 |
| 2.223 | 1.96 | 13500 | 2.8101 |
| 2.1995 | 2.04 | 14000 | 2.8027 |
| 2.1834 | 2.11 | 14500 | 2.7880 |
| 2.1723 | 2.18 | 15000 | 2.7783 |
| 2.1651 | 2.26 | 15500 | 2.7739 |
| 2.1575 | 2.33 | 16000 | 2.7825 |
| 2.1598 | 2.4 | 16500 | 2.7660 |
| 2.1667 | 2.47 | 17000 | 2.7578 |
| 2.1565 | 2.55 | 17500 | 2.7580 |
| 2.1558 | 2.62 | 18000 | 2.7561 |
| 2.1642 | 2.69 | 18500 | 2.7512 |
| 2.1374 | 2.77 | 19000 | 2.7361 |
| 2.1402 | 2.84 | 19500 | 2.7385 |
| 2.1326 | 2.91 | 20000 | 2.7235 |
| 2.1272 | 2.98 | 20500 | 2.7183 |
| 2.0954 | 3.06 | 21000 | 2.7156 |
| 2.0842 | 3.13 | 21500 | 2.7065 |
| 2.0859 | 3.2 | 22000 | 2.7089 |
| 2.0856 | 3.27 | 22500 | 2.6962 |
| 2.0775 | 3.35 | 23000 | 2.6931 |
| 2.0821 | 3.42 | 23500 | 2.6933 |
| 2.0706 | 3.49 | 24000 | 2.7011 |
| 2.0689 | 3.57 | 24500 | 2.7009 |
| 2.0807 | 3.64 | 25000 | 2.6825 |
| 2.0639 | 3.71 | 25500 | 2.6744 |
| 2.0742 | 3.78 | 26000 | 2.6777 |
| 2.0789 | 3.86 | 26500 | 2.6689 |
| 2.0594 | 3.93 | 27000 | 2.6566 |
| 2.056 | 4.0 | 27500 | 2.6676 |
| 2.0223 | 4.08 | 28000 | 2.6711 |
| 2.0185 | 4.15 | 28500 | 2.6568 |
| 2.018 | 4.22 | 29000 | 2.6567 |
| 2.0036 | 4.29 | 29500 | 2.6545 |
| 2.0238 | 4.37 | 30000 | 2.6559 |
| 2.0091 | 4.44 | 30500 | 2.6450 |
| 2.0096 | 4.51 | 31000 | 2.6389 |
| 2.0083 | 4.58 | 31500 | 2.6401 |
| 2.0012 | 4.66 | 32000 | 2.6399 |
| 2.0166 | 4.73 | 32500 | 2.6289 |
| 1.9963 | 4.8 | 33000 | 2.6348 |
| 1.9943 | 4.88 | 33500 | 2.6240 |
| 2.0099 | 4.95 | 34000 | 2.6190 |
| 1.9895 | 5.02 | 34500 | 2.6308 |
| 1.9581 | 5.09 | 35000 | 2.6385 |
| 1.9502 | 5.17 | 35500 | 2.6237 |
| 1.9485 | 5.24 | 36000 | 2.6248 |
| 1.9643 | 5.31 | 36500 | 2.6279 |
| 1.9535 | 5.38 | 37000 | 2.6185 |
| 1.9575 | 5.46 | 37500 | 2.6146 |
| 1.9475 | 5.53 | 38000 | 2.6093 |
| 1.9434 | 5.6 | 38500 | 2.6090 |
| 1.954 | 5.68 | 39000 | 2.6027 |
| 1.9509 | 5.75 | 39500 | 2.6107 |
| 1.9454 | 5.82 | 40000 | 2.5980 |
| 1.9479 | 5.89 | 40500 | 2.6016 |
| 1.9539 | 5.97 | 41000 | 2.5971 |
| 1.9119 | 6.04 | 41500 | 2.6228 |
| 1.8974 | 6.11 | 42000 | 2.6169 |
| 1.9038 | 6.19 | 42500 | 2.6027 |
| 1.9008 | 6.26 | 43000 | 2.6027 |
| 1.9142 | 6.33 | 43500 | 2.6011 |
| 1.8783 | 6.4 | 44000 | 2.5960 |
| 1.8896 | 6.48 | 44500 | 2.6111 |
| 1.8975 | 6.55 | 45000 | 2.5889 |
| 1.9048 | 6.62 | 45500 | 2.6007 |
| 1.9049 | 6.69 | 46000 | 2.5972 |
| 1.8969 | 6.77 | 46500 | 2.6053 |
| 1.9105 | 6.84 | 47000 | 2.5893 |
| 1.8921 | 6.91 | 47500 | 2.5883 |
| 1.8918 | 6.99 | 48000 | 2.5792 |
| 1.8671 | 7.06 | 48500 | 2.6041 |
| 1.8551 | 7.13 | 49000 | 2.6070 |
| 1.8555 | 7.2 | 49500 | 2.6148 |
| 1.8543 | 7.28 | 50000 | 2.6077 |
| 1.8485 | 7.35 | 50500 | 2.6131 |
| 1.8474 | 7.42 | 51000 | 2.6039 |
| 1.8474 | 7.5 | 51500 | 2.5973 |
| 1.8442 | 7.57 | 52000 | 2.5946 |
| 1.8329 | 7.64 | 52500 | 2.6069 |
| 1.8551 | 7.71 | 53000 | 2.5923 |
| 1.8433 | 7.79 | 53500 | 2.5922 |
| 1.851 | 7.86 | 54000 | 2.5993 |
| 1.8313 | 7.93 | 54500 | 2.5960 |
| 1.8298 | 8.0 | 55000 | 2.6058 |
| 1.8159 | 8.08 | 55500 | 2.6286 |
| 1.817 | 8.15 | 56000 | 2.6348 |
| 1.8066 | 8.22 | 56500 | 2.6411 |
| 1.7935 | 8.3 | 57000 | 2.6338 |
| 1.809 | 8.37 | 57500 | 2.6290 |
| 1.812 | 8.44 | 58000 | 2.6258 |
| 1.79 | 8.51 | 58500 | 2.6321 |
| 1.8046 | 8.59 | 59000 | 2.6291 |
| 1.7975 | 8.66 | 59500 | 2.6283 |
| 1.7968 | 8.73 | 60000 | 2.6284 |
| 1.7779 | 8.81 | 60500 | 2.6257 |
| 1.7664 | 8.88 | 61000 | 2.6232 |
| 1.792 | 8.95 | 61500 | 2.6305 |
| 1.7725 | 9.02 | 62000 | 2.6525 |
| 1.7563 | 9.1 | 62500 | 2.6794 |
| 1.7606 | 9.17 | 63000 | 2.6784 |
| 1.7666 | 9.24 | 63500 | 2.6798 |
| 1.7551 | 9.31 | 64000 | 2.6813 |
| 1.7578 | 9.39 | 64500 | 2.6830 |
| 1.7483 | 9.46 | 65000 | 2.6833 |
| 1.7431 | 9.53 | 65500 | 2.6884 |
| 1.743 | 9.61 | 66000 | 2.6932 |
| 1.7395 | 9.68 | 66500 | 2.6927 |
| 1.7473 | 9.75 | 67000 | 2.6904 |
| 1.7413 | 9.82 | 67500 | 2.6892 |
| 1.7437 | 9.9 | 68000 | 2.6898 |
| 1.7546 | 9.97 | 68500 | 2.6894 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Supreeth/DeBERTa-Twitter-Emotion-Classification | 35ad6af4e1d828b0838d95589ac437c55f3f6bc0 | 2022-07-17T16:47:32.000Z | [
"pytorch",
"deberta",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | Supreeth | null | Supreeth/DeBERTa-Twitter-Emotion-Classification | 31 | null | transformers | 7,122 | ---
license: mit
---
# Label - Emotion Table
| Emotion | LABEL |
| -------------- |:-------------: |
| Anger | LABEL_0 |
| Boredom | LABEL_1 |
| Empty | LABEL_2 |
| Enthusiasm | LABEL_3 |
| Fear | LABEL_4 |
| Fun | LABEL_5 |
| Happiness | LABEL_6 |
| Hate | LABEL_7 |
| Joy | LABEL_8 |
| Love | LABEL_9 |
| Neutral | LABEL_10 |
| Relief | LABEL_11 |
| Sadness | LABEL_12 |
| Surprise | LABEL_13 |
| Worry | LABEL_14 |
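A minimal usage sketch showing how the pipeline output maps back to the emotion names in the table above; the input tweet is illustrative:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Supreeth/DeBERTa-Twitter-Emotion-Classification")

# Mapping from pipeline labels to emotion names, taken from the table above.
id2emotion = {
    "LABEL_0": "Anger", "LABEL_1": "Boredom", "LABEL_2": "Empty", "LABEL_3": "Enthusiasm",
    "LABEL_4": "Fear", "LABEL_5": "Fun", "LABEL_6": "Happiness", "LABEL_7": "Hate",
    "LABEL_8": "Joy", "LABEL_9": "Love", "LABEL_10": "Neutral", "LABEL_11": "Relief",
    "LABEL_12": "Sadness", "LABEL_13": "Surprise", "LABEL_14": "Worry",
}

result = classifier("I can't wait for the weekend!")[0]
print(id2emotion[result["label"]], result["score"])
```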
|
anahitapld/dbd_t5 | 06bac9dd72a82134449b4f5b0d9f497e558e631c | 2022-07-18T07:47:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | anahitapld | null | anahitapld/dbd_t5 | 31 | null | transformers | 7,123 | ---
license: apache-2.0
---
|
tahercoolguy/nllb-8bit-600 | 82f292f223ff1439b71fea14773d6655281b4206 | 2022-07-23T11:34:42.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | tahercoolguy | null | tahercoolguy/nllb-8bit-600 | 31 | null | transformers | 7,124 | ---
license: apache-2.0
---
|
HMHMlee/BioLinkBERT-base-finetuned-ner | 4469ece8896907001be8f15b78ccb7d295bd033f | 2022-07-26T08:05:20.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | HMHMlee | null | HMHMlee/BioLinkBERT-base-finetuned-ner | 31 | 1 | transformers | 7,125 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BioLinkBERT-base-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioLinkBERT-base-finetuned-ner
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1226
- Precision: 0.8760
- Recall: 0.9185
- F1: 0.8968
- Accuracy: 0.9647
## Model description
This model is designed to perform named entity recognition (NER) on domain-specific text using BioLinkBERT.
## Intended uses & limitations
The goal was to have drug mentions tagged directly for a given sentence; a current limitation is that entities are reported as generic LABEL indices rather than entity names:

- LABEL0: irrelevant text
- LABEL1, LABEL2: drug
- LABEL3, LABEL4: condition
## Training and evaluation data
More information needed
## Training procedure
Reference Code: SciBERT Fine-Tuning on Drug/ADE Corpus (https://github.com/jsylee/personal-projects/blob/master/Hugging%20Face%20ADR%20Fine-Tuning/SciBERT%20ADR%20Fine-Tuning.ipynb)
## How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("HMHMlee/BioLinkBERT-base-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("HMHMlee/BioLinkBERT-base-finetuned-ner")
```
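A minimal inference sketch using the token-classification pipeline; the example sentence is illustrative, and the returned labels follow the LABEL indices listed above:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="HMHMlee/BioLinkBERT-base-finetuned-ner")

# Illustrative sentence; labels come back as LABEL indices (see the mapping above).
for token in ner("The patient was given aspirin for her headache."):
    print(token["word"], token["entity"], round(token["score"], 3))
```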
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1099 | 1.0 | 201 | 0.1489 | 0.8415 | 0.9032 | 0.8713 | 0.9566 |
| 0.1716 | 2.0 | 402 | 0.1318 | 0.8456 | 0.9135 | 0.8782 | 0.9597 |
| 0.1068 | 3.0 | 603 | 0.1197 | 0.8682 | 0.9110 | 0.8891 | 0.9641 |
| 0.0161 | 4.0 | 804 | 0.1219 | 0.8694 | 0.9157 | 0.8919 | 0.9639 |
| 0.1499 | 5.0 | 1005 | 0.1226 | 0.8760 | 0.9185 | 0.8968 | 0.9647 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Perselope/thesis-audio-1 | 380fa4c32e09b4bbc6a8c8120e1e18423bca71de | 2022-07-28T13:27:40.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Perselope | null | Perselope/thesis-audio-1 | 31 | null | transformers | 7,126 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: thesis-audio-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thesis-audio-1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4268
- Wer: 0.3395
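A minimal transcription sketch, assuming the processor files were saved with the checkpoint and the input audio is 16 kHz mono; the file path is a placeholder:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("Perselope/thesis-audio-1")
model = Wav2Vec2ForCTC.from_pretrained("Perselope/thesis-audio-1")

# Load a 16 kHz mono audio file (placeholder path).
speech, sample_rate = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```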
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4633 | 4.0 | 500 | 1.4892 | 1.0006 |
| 0.5377 | 8.0 | 1000 | 0.4046 | 0.4163 |
| 0.1818 | 12.0 | 1500 | 0.4255 | 0.3850 |
| 0.1024 | 16.0 | 2000 | 0.4574 | 0.3644 |
| 0.0723 | 20.0 | 2500 | 0.4412 | 0.3550 |
| 0.0542 | 24.0 | 3000 | 0.4095 | 0.3404 |
| 0.0434 | 28.0 | 3500 | 0.4268 | 0.3395 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Aleksandra/herbert-base-cased-finetuned-squad | 4cbf8e1987f9367451c884520c75022619d2111a | 2022-01-20T13:14:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Aleksandra | null | Aleksandra/herbert-base-cased-finetuned-squad | 30 | null | transformers | 7,127 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: herbert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# herbert-base-cased-finetuned-squad
This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2071
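A minimal question-answering sketch; the Polish question and context below are only illustrative examples:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Aleksandra/herbert-base-cased-finetuned-squad")

result = qa(
    question="Co jest stolicą Polski?",
    context="Warszawa jest stolicą i największym miastem Polski.",
)
print(result["answer"], result["score"])
```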
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 233 | 1.2474 |
| No log | 2.0 | 466 | 1.1951 |
| 1.3459 | 3.0 | 699 | 1.2071 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth | 25af6aa7dc6a787aa0525be1604aa0ae45e2a9cf | 2021-09-14T14:29:52.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth | 30 | 2 | transformers | 7,128 | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA-eighth** (`bert-base-arabic-camelbert-msa-eighth`), a model pre-trained on an eighth of the full MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
|✔|`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.057812128216028214,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.05573025345802307,
'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو الكمال. [SEP]',
'score': 0.035942986607551575,
'token': 17188,
'token_str': 'الكمال'},
{'sequence': '[CLS] الهدف من الحياة هو التعلم. [SEP]',
'score': 0.03375256434082985,
'token': 12554,
'token_str': 'التعلم'},
{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
'score': 0.030303971841931343,
'token': 2854,
'token_str': 'العمل'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 dataset.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
Capreolus/electra-base-msmarco | a973d2976540da0777ebea375173a4e2a6a540db | 2020-09-08T14:53:10.000Z | [
"pytorch",
"tf",
"electra",
"text-classification",
"arxiv:2008.09093",
"transformers"
] | text-classification | false | Capreolus | null | Capreolus/electra-base-msmarco | 30 | null | transformers | 7,129 | # capreolus/electra-base-msmarco
## Model description
ELECTRA-Base model (`google/electra-base-discriminator`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model, but requires some modification since it contains a BERT classification head rather than the standard ELECTRA classification head. See the [TFElectraRelevanceHead](https://github.com/capreolus-ir/capreolus/blob/master/capreolus/reranker/TFBERTMaxP.py) in the Capreolus BERT-MaxP implementation for a usage example.
This corresponds to the ELECTRA-Base model used to initialize PARADE (ELECTRA) in [PARADE: Passage Representation Aggregation for Document Reranking](https://arxiv.org/abs/2008.09093) by Li et al. It was converted from the released [TFv1 checkpoint](https://zenodo.org/record/3974431/files/vanilla_electra_base_on_MSMARCO.tar.gz). Please cite the PARADE paper if you use these weights.
|
FuriouslyAsleep/markuplm-large-finetuned-qa | ed8e8dd012ad26dcfac4b7edbf8b192d5b0e5e1d | 2022-02-10T20:30:55.000Z | [
"pytorch",
"markuplm",
"question-answering",
"arxiv:2110.08518",
"transformers",
"autotrain_compatible"
] | question-answering | false | FuriouslyAsleep | null | FuriouslyAsleep/markuplm-large-finetuned-qa | 30 | null | transformers | 7,130 | # MarkupLM Large fine-tuned on WebSRC to allow Question Answering.
This model is adapted from Microsoft's MarkupLM. This fine-tuned model is the result of partially following the instructions in the MarkupLM git repo (with adjustments described further below under the Fine-tuning args section). This version is not endorsed by Microsoft.
Test the question answering out in the [Markup QA space here](https://huggingface.co/spaces/FuriouslyAsleep/markupQAdemo)
\---------------------------------------------------------------------------------
**Fine-tuned Multimodal (text +markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
## Introduction (From Microsoft MarkupLM Large Model Card)
MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM archives the SOTA results on multiple datasets. For more details, please refer to our paper:
[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
\---------------------------------------------------------------------------------
Fine-tuning args:
--per_gpu_train_batch_size 4 --warmup_ratio 0.1 --num_train_epochs 4
## Training was performed on only a small subset of the WebSRC:
\
The number of total websites is 60
The train websites list is ['ga09']
The test websites list is []
The dev websites list is ['ga12', 'ph04', 'au08', 'ga10', 'au01', 'bo17', 'mo02', 'jo11', 'sp09', 'sp10', 'ph03', 'ph01', 'un09', 'sp14', 'jo03', 'sp07', 'un07', 'bo07', 'mo04', 'bo09', 'jo10', 'un12', 're02', 'bo01', 'ca01', 'sp15', 'au12', 'un03', 're03', 'jo13', 'ph02', 'un10', 'au09', 'au10', 'un02', 'mo07', 'sp13', 'bo08', 'sp03', 're05', 'sp06', 'ca02', 'sp02', 'sp01', 'au03', 'sp11', 'mo06', 'bo10', 'un11', 'un06', 'ga01', 'un04', 'ph05', 'au11', 'sp12', 'jo05', 'sp04', 'jo12', 'sp08']
The number of processed websites is 60
\---------------------------------------------------------------------------------
Inference test here may not work. Use the transformers markuplm branch from [NielsRogge transformers markuplm branch](https://github.com/NielsRogge/transformers/tree/modeling_markuplm)
After installing from there, try the following model and tokenizer assignments (consider loading the tags dict from a file):
```python
model = MarkupLMForQuestionAnswering.from_pretrained("FuriouslyAsleep/markuplm-large-finetuned-qa")

tokenizer = MarkupLMTokenizer(
    vocab_file="vocab.json",
    merges_file="merges.txt",
tags_dict= {"a": 0, "abbr": 1, "acronym": 2, "address": 3, "altGlyph": 4, "altGlyphDef": 5, "altGlyphItem": 6, "animate": 7, "animateColor": 8, "animateMotion": 9, "animateTransform": 10, "applet": 11, "area": 12, "article": 13, "aside": 14, "audio": 15, "b": 16, "base": 17, "basefont": 18, "bdi": 19, "bdo": 20, "bgsound": 21, "big": 22, "blink": 23, "blockquote": 24, "body": 25, "br": 26, "button": 27, "canvas": 28, "caption": 29, "center": 30, "circle": 31, "cite": 32, "clipPath": 33, "code": 34, "col": 35, "colgroup": 36, "color-profile": 37, "content": 38, "cursor": 39, "data": 40, "datalist": 41, "dd": 42, "defs": 43, "del": 44, "desc": 45, "details": 46, "dfn": 47, "dialog": 48, "dir": 49, "div": 50, "dl": 51, "dt": 52, "ellipse": 53, "em": 54, "embed": 55, "feBlend": 56, "feColorMatrix": 57, "feComponentTransfer": 58, "feComposite": 59, "feConvolveMatrix": 60, "feDiffuseLighting": 61, "feDisplacementMap": 62, "feDistantLight": 63, "feFlood": 64, "feFuncA": 65, "feFuncB": 66, "feFuncG": 67, "feFuncR": 68, "feGaussianBlur": 69, "feImage": 70, "feMerge": 71, "feMergeNode": 72, "feMorphology": 73, "feOffset": 74, "fePointLight": 75, "feSpecularLighting": 76, "feSpotLight": 77, "feTile": 78, "feTurbulence": 79, "fieldset": 80, "figcaption": 81, "figure": 82, "filter": 83, "font-face-format": 84, "font-face-name": 85, "font-face-src": 86, "font-face-uri": 87, "font-face": 88, "font": 89, "footer": 90, "foreignObject": 91, "form": 92, "frame": 93, "frameset": 94, "g": 95, "glyph": 96, "glyphRef": 97, "h1": 98, "h2": 99, "h3": 100, "h4": 101, "h5": 102, "h6": 103, "head": 104, "header": 105, "hgroup": 106, "hkern": 107, "hr": 108, "html": 109, "i": 110, "iframe": 111, "image": 112, "img": 113, "input": 114, "ins": 115, "kbd": 116, "keygen": 117, "label": 118, "legend": 119, "li": 120, "line": 121, "linearGradient": 122, "link": 123, "main": 124, "map": 125, "mark": 126, "marker": 127, "marquee": 128, "mask": 129, "math": 130, "menu": 131, "menuitem": 132, "meta": 133, "metadata": 134, "meter": 135, "missing-glyph": 136, "mpath": 137, "nav": 138, "nobr": 139, "noembed": 140, "noframes": 141, "noscript": 142, "object": 143, "ol": 144, "optgroup": 145, "option": 146, "output": 147, "p": 148, "param": 149, "path": 150, "pattern": 151, "picture": 152, "plaintext": 153, "polygon": 154, "polyline": 155, "portal": 156, "pre": 157, "progress": 158, "q": 159, "radialGradient": 160, "rb": 161, "rect": 162, "rp": 163, "rt": 164, "rtc": 165, "ruby": 166, "s": 167, "samp": 168, "script": 169, "section": 170, "select": 171, "set": 172, "shadow": 173, "slot": 174, "small": 175, "source": 176, "spacer": 177, "span": 178, "stop": 179, "strike": 180, "strong": 181, "style": 182, "sub": 183, "summary": 184, "sup": 185, "svg": 186, "switch": 187, "symbol": 188, "table": 189, "tbody": 190, "td": 191, "template": 192, "text": 193, "textPath": 194, "textarea": 195, "tfoot": 196, "th": 197, "thead": 198, "time": 199, "title": 200, "tr": 201, "track": 202, "tref": 203, "tspan": 204, "tt": 205, "u": 206, "ul": 207, "use": 208, "var": 209, "video": 210, "view": 211, "vkern": 212, "wbr": 213, "xmp": 214},
    add_prefix_space=True,
)
```
Go to [https://github.com/uwts/ProjectRisk](https://github.com/uwts/ProjectRisk) for sample script. |
Helsinki-NLP/opus-mt-af-fr | d01086028ee2e84b2a9f1517945d1b651bd08acc | 2021-09-09T21:26:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"af",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-af-fr | 30 | null | transformers | 7,131 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-af-fr
* source languages: af
* target languages: fr
* OPUS readme: [af-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.af.fr | 35.3 | 0.543 |
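A minimal translation sketch using the standard MarianMT interface; the Afrikaans input sentence is an illustrative example:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-af-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Afrikaans -> French
batch = tokenizer(["Die weer is vandag mooi."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```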
|
Helsinki-NLP/opus-mt-bcl-en | 348eac648d5c855db90888990e4033305139c72a | 2021-09-09T21:26:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bcl",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bcl-en | 30 | null | transformers | 7,132 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bcl-en
* source languages: bcl
* target languages: en
* OPUS readme: [bcl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.zip)
* test set translations: [opus-2020-02-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.test.txt)
* test set scores: [opus-2020-02-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.en | 56.1 | 0.697 |
|
Helsinki-NLP/opus-mt-efi-en | 0bf437954f943da3d49a172b6f91aa7157c3525a | 2021-09-09T21:33:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"efi",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-efi-en | 30 | null | transformers | 7,133 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-efi-en
* source languages: efi
* target languages: en
* OPUS readme: [efi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.en | 35.4 | 0.510 |
|
Helsinki-NLP/opus-mt-en-itc | 74d38d3b83efcdfbefc27b958bae6f36760a8698 | 2021-01-18T08:10:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"sc",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"itc",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-itc | 30 | 1 | transformers | 7,134 | ---
language:
- en
- it
- ca
- rm
- es
- ro
- gl
- sc
- co
- wa
- pt
- oc
- an
- id
- fr
- ht
- itc
tags:
- translation
license: apache-2.0
---
### eng-itc
* source group: English
* target group: Italic languages
* OPUS readme: [eng-itc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md)
* model: transformer
* source language(s): eng
* target language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.eval.txt)
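A minimal usage sketch showing the required target-language token (here `>>fra<<` for French; any valid target ID from the list above can be substituted), with an illustrative English input sentence:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-itc"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>fra<< prefix selects French as the target language.
src = [">>fra<< The weather is nice today."]
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```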
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro-engron.eng.ron | 27.1 | 0.565 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 29.9 | 0.574 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 35.3 | 0.609 |
| newssyscomb2009-engfra.eng.fra | 27.7 | 0.567 |
| newssyscomb2009-engita.eng.ita | 28.6 | 0.586 |
| newssyscomb2009-engspa.eng.spa | 29.8 | 0.569 |
| news-test2008-engfra.eng.fra | 25.0 | 0.536 |
| news-test2008-engspa.eng.spa | 27.1 | 0.548 |
| newstest2009-engfra.eng.fra | 26.7 | 0.557 |
| newstest2009-engita.eng.ita | 28.9 | 0.583 |
| newstest2009-engspa.eng.spa | 28.9 | 0.567 |
| newstest2010-engfra.eng.fra | 29.6 | 0.574 |
| newstest2010-engspa.eng.spa | 33.8 | 0.598 |
| newstest2011-engfra.eng.fra | 30.9 | 0.590 |
| newstest2011-engspa.eng.spa | 34.8 | 0.598 |
| newstest2012-engfra.eng.fra | 29.1 | 0.574 |
| newstest2012-engspa.eng.spa | 34.9 | 0.600 |
| newstest2013-engfra.eng.fra | 30.1 | 0.567 |
| newstest2013-engspa.eng.spa | 31.8 | 0.576 |
| newstest2016-enro-engron.eng.ron | 25.9 | 0.548 |
| Tatoeba-test.eng-arg.eng.arg | 1.6 | 0.120 |
| Tatoeba-test.eng-ast.eng.ast | 17.2 | 0.389 |
| Tatoeba-test.eng-cat.eng.cat | 47.6 | 0.668 |
| Tatoeba-test.eng-cos.eng.cos | 4.3 | 0.287 |
| Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.101 |
| Tatoeba-test.eng-ext.eng.ext | 8.7 | 0.287 |
| Tatoeba-test.eng-fra.eng.fra | 44.9 | 0.635 |
| Tatoeba-test.eng-frm.eng.frm | 1.0 | 0.225 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.7 | 0.115 |
| Tatoeba-test.eng-glg.eng.glg | 44.9 | 0.648 |
| Tatoeba-test.eng-hat.eng.hat | 30.9 | 0.533 |
| Tatoeba-test.eng-ita.eng.ita | 45.4 | 0.673 |
| Tatoeba-test.eng-lad.eng.lad | 5.6 | 0.279 |
| Tatoeba-test.eng-lat.eng.lat | 12.1 | 0.380 |
| Tatoeba-test.eng-lij.eng.lij | 1.4 | 0.183 |
| Tatoeba-test.eng-lld.eng.lld | 0.5 | 0.199 |
| Tatoeba-test.eng-lmo.eng.lmo | 0.7 | 0.187 |
| Tatoeba-test.eng-mfe.eng.mfe | 83.6 | 0.909 |
| Tatoeba-test.eng-msa.eng.msa | 31.3 | 0.549 |
| Tatoeba-test.eng.multi | 38.0 | 0.588 |
| Tatoeba-test.eng-mwl.eng.mwl | 2.7 | 0.322 |
| Tatoeba-test.eng-oci.eng.oci | 8.2 | 0.293 |
| Tatoeba-test.eng-pap.eng.pap | 46.7 | 0.663 |
| Tatoeba-test.eng-pms.eng.pms | 2.1 | 0.194 |
| Tatoeba-test.eng-por.eng.por | 41.2 | 0.635 |
| Tatoeba-test.eng-roh.eng.roh | 2.6 | 0.237 |
| Tatoeba-test.eng-ron.eng.ron | 40.6 | 0.632 |
| Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.181 |
| Tatoeba-test.eng-spa.eng.spa | 49.5 | 0.685 |
| Tatoeba-test.eng-vec.eng.vec | 1.6 | 0.223 |
| Tatoeba-test.eng-wln.eng.wln | 7.1 | 0.250 |
### System Info:
- hf_name: eng-itc
- source_languages: eng
- target_languages: itc
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']
- src_constituents: {'eng'}
- tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: itc
- short_pair: en-itc
- chrF2_score: 0.588
- bleu: 38.0
- brevity_penalty: 0.9670000000000001
- ref_len: 73951.0
- src_name: English
- tgt_name: Italic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: itc
- prefer_old: False
- long_pair: eng-itc
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fi-fr | c936f809f49131ec06fe13b1045eeeb455ccf104 | 2021-09-09T21:47:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-fr | 30 | null | transformers | 7,135 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-fr
* source languages: fi
* target languages: fr
* OPUS readme: [fi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-fr/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-fr/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-fr/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fi.fr | 50.7 | 0.670 |
|
Helsinki-NLP/opus-mt-no-de | 19e8bdf5de2e254ae25605c0afd0bb68d8bdd6ee | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"no",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-no-de | 30 | null | transformers | 7,136 | ---
language:
- no
- de
tags:
- translation
license: apache-2.0
---
### nor-deu
* source group: Norwegian
* target group: German
* OPUS readme: [nor-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.deu | 29.6 | 0.541 |
### System Info:
- hf_name: nor-deu
- source_languages: nor
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'de']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: deu
- short_pair: no-de
- chrF2_score: 0.541
- bleu: 29.6
- brevity_penalty: 0.96
- ref_len: 34575.0
- src_name: Norwegian
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: de
- prefer_old: False
- long_pair: nor-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-sg-en | 227bc46ddfc78d4e0caa4b4b5fed91e0db8a0ab0 | 2021-09-10T14:03:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sg",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sg-en | 30 | null | transformers | 7,137 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sg-en
* source languages: sg
* target languages: en
* OPUS readme: [sg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.en | 32.0 | 0.477 |
|
KoichiYasuoka/roberta-classical-chinese-large-upos | 8a3afed02fb70e16f9026b55c30e786074c7ac0a | 2022-07-05T22:11:02.000Z | [
"pytorch",
"roberta",
"token-classification",
"lzh",
"dataset:universal_dependencies",
"transformers",
"classical chinese",
"literary chinese",
"ancient chinese",
"pos",
"dependency-parsing",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-classical-chinese-large-upos | 30 | null | transformers | 7,138 | ---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
---
# roberta-classical-chinese-large-upos
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-classical-chinese-large-upos")
```
## Reference
Koichi Yasuoka: [Universal Dependencies Treebank of the Four Books in Classical Chinese](http://hdl.handle.net/2433/245217), DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
NTUYG/SOTitle-Gen-T5 | 6837a37162f274cce6fb79e5580e0938f58a8871 | 2021-09-10T09:51:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | NTUYG | null | NTUYG/SOTitle-Gen-T5 | 30 | null | transformers | 7,139 | Entry not found |
Tsubasaz/clinical-pubmed-bert-base-512 | e5a643404ed6bb0e992200a27d07c83a65558b60 | 2022-05-06T10:55:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:MIMIC-III",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | Tsubasaz | null | Tsubasaz/clinical-pubmed-bert-base-512 | 30 | 2 | transformers | 7,140 | ---
language:
- en
license: mit
datasets:
- MIMIC-III
widget:
- text: "Due to shortness of breath, the patient is diagnosed with [MASK], and other respiratory problems."
example_title: "Example 1"
- text: "Due to high blood sugar, and very low blood pressure, the patient is diagnosed with [MASK]."
example_title: "Example 2"
---
# ClinicalPubMedBERT
## Description
A pre-trained model for clinical decision support; for more details, please see https://github.com/NtaylorOX/Public_Prompt_Mimic_III
A BERT model pre-trained on PubMed abstracts and continually pre-trained on clinical notes ([MIMIC-III](https://mimic.physionet.org/)). We combine two domains that have little overlap with general-knowledge text corpora: EHRs and biomedical papers. We hope this model can deliver better results on clinical downstream tasks such as readmission prediction.
This model is trained on 500,000 clinical notes randomly sampled from MIMIC datasets, with 100k steps of training. We also used whole word masking to enhance the coherence of the language model. All notes are chunked to a length of 512 tokens.
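A minimal fill-mask sketch using one of the widget examples above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Tsubasaz/clinical-pubmed-bert-base-512")

text = ("Due to shortness of breath, the patient is diagnosed with [MASK], "
        "and other respiratory problems.")
for prediction in fill_mask(text):
    print(prediction["token_str"], round(prediction["score"], 3))
```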
Pre-trained model: https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract |
VoVanPhuc/vietnamese-summarization | a3b4a80e9f148acc8660ecacd2009a854e77be3b | 2021-09-13T03:54:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | VoVanPhuc | null | VoVanPhuc/vietnamese-summarization | 30 | null | transformers | 7,141 | Entry not found |
aware-ai/longformer-squadv2 | 74f1fe8292734e4072a24e479cd0482861c27a71 | 2020-08-07T11:30:59.000Z | [
"pytorch",
"tf",
"longformer",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aware-ai | null | aware-ai/longformer-squadv2 | 30 | null | transformers | 7,142 | Entry not found |
addy88/hindi-wav2vec2-stt | d003bf31d176c67200cd6ca315c5e57ef1bb65a6 | 2021-12-09T03:55:47.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | addy88 | null | addy88/hindi-wav2vec2-stt | 30 | null | transformers | 7,143 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor


def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/hindi-wav2vec2-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/hindi-wav2vec2-stt")

    # load audio (expects 16 kHz mono audio)
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return a PyTorch tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # inference: retrieve logits and take the argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)


# Example call (replace with the path to your own .wav file):
# parse_transcription("sample.wav")
``` |
airesearch/xlm-roberta-base-finetune-qa | fb4f5550052f921cdbb04562b8cdf7d62cd00310 | 2021-07-14T07:13:00.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | airesearch | null | airesearch/xlm-roberta-base-finetune-qa | 30 | null | transformers | 7,144 | ---
widget:
- text: "สวนกุหลาบเป็นโรงเรียนอะไร"
context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย"
---
# xlm-roberta-base-finetune-qa
Fine-tuning of `xlm-roberta-base` on the training sets of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (examples with cosine similarity above 0.8 to validation or test examples were removed; contexts of the latter two datasets are trimmed to around 300 `newmm` words). Benchmarks are shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using the validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Train with:
```
export WANDB_PROJECT=wangchanberta-qa
export MODEL_NAME=xlm-roberta-base
python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--pad_on_right \
--fp16
```
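For inference, the checkpoint can be used with the standard question-answering pipeline; the question and context below are the widget example from the top of this card:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="airesearch/xlm-roberta-base-finetune-qa")

# Question and context taken from the widget example above.
question = "สวนกุหลาบเป็นโรงเรียนอะไร"
context = "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย"

result = qa(question=question, context=context)
print(result["answer"], result["score"])
```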
|
archmagos/HourAI | ba46d38528287c054cc895ad4baf42b25d83978a | 2022-05-03T20:09:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | archmagos | null | archmagos/HourAI | 30 | null | transformers | 7,145 | ---
tags:
- conversational
---
# HourAI bot based on DialoGPT |
benjamin/roberta-base-wechsel-french | 5608715c8a81314f3fb2ac0462ccc6d149e16c9f | 2022-07-13T23:44:38.000Z | [
"pytorch",
"roberta",
"fill-mask",
"fr",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | benjamin | null | benjamin/roberta-base-wechsel-french | 30 | 1 | transformers | 7,146 | ---
language: fr
license: mit
---
# roberta-base-wechsel-french
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
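A minimal usage sketch with the `fill-mask` pipeline (the masked French sentence is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="benjamin/roberta-base-wechsel-french")
print(unmasker("La capitale de la France est <mask>."))
```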
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
boychaboy/SNLI_roberta-large | d6ec1fa4829a98fd07ff06f9aa6422f067f0026b | 2021-05-20T14:37:47.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | boychaboy | null | boychaboy/SNLI_roberta-large | 30 | null | transformers | 7,147 | Entry not found |
chinhon/bart-large-chinese-cnhdwriter | cca6399ef69fcfa2b06525924d0e999896d71c56 | 2022-01-22T06:01:33.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chinhon | null | chinhon/bart-large-chinese-cnhdwriter | 30 | 1 | transformers | 7,148 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-chinese-cnhdwriter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-chinese-cnhdwriter
This model is a fine-tuned version of [fnlp/bart-large-chinese](https://huggingface.co/fnlp/bart-large-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3859
- Rouge1: 16.8496
- Rouge2: 2.5548
- Rougel: 16.8123
- Rougelsum: 16.8056
- Gen Len: 18.9357
## Model description
More information needed
## Intended uses & limitations
More information needed
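As a starting point, a minimal inference sketch with the `text2text-generation` pipeline; this assumes the repository bundles a tokenizer compatible with `fnlp/bart-large-chinese`, and the input text below is a placeholder:
```python
from transformers import pipeline

headline_writer = pipeline("text2text-generation", model="chinhon/bart-large-chinese-cnhdwriter")
article = "在此处输入需要生成标题的中文新闻正文。"  # placeholder article text
print(headline_writer(article, max_length=64, truncation=True))
```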
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.2119 | 1.0 | 62716 | 1.1876 | 15.3858 | 2.1251 | 15.3709 | 15.3705 | 18.7269 |
| 1.0847 | 2.0 | 125432 | 1.3353 | 13.7743 | 1.9047 | 13.7664 | 13.7421 | 18.6183 |
| 0.6995 | 3.0 | 188148 | 1.2209 | 16.6797 | 2.3979 | 16.6258 | 16.6368 | 18.8953 |
| 0.4819 | 4.0 | 250864 | 1.3859 | 16.8496 | 2.5548 | 16.8123 | 16.8056 | 18.9357 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
classla/sloberta-frenk-hate | 99fd4dee852c2a9f9a176d7919c7a11d63251e18 | 2021-11-30T12:42:46.000Z | [
"pytorch",
"camembert",
"text-classification",
"sl",
"arxiv:1907.11692",
"arxiv:1906.02045",
"transformers",
"hate-speech"
] | text-classification | false | classla | null | classla/sloberta-frenk-hate | 30 | null | transformers | 7,149 | ---
language: "sl"
tags:
- text-classification
- hate-speech
widget:
- text: "Silva, ti si grda in neprijazna"
---
Text classification model based on `EMBEDDIA/sloberta` and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433), which comprises LGBT and migrant hate speech. Only the Slovenian subset of the data was used for fine-tuning, and the dataset was relabeled for binary classification (offensive or acceptable).
## Fine-tuning hyperparameters
Fine-tuning was performed with `simpletransformers`. Beforehand a brief hyperparameter optimisation was performed and the presumed optimal hyperparameters are:
```python
model_args = {
"num_train_epochs": 14,
"learning_rate": 1e-5,
"train_batch_size": 21,
}
```
## Performance
The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and analyzed afterwards.
| model | average accuracy | average macro F1|
|---|---|---|
|sloberta-frenk-hate|0.7785|0.7764|
|EMBEDDIA/crosloengual-bert |0.7616|0.7585|
|xlm-roberta-base |0.686|0.6827|
|fasttext|0.709 |0.701 |
From recorded accuracies and macro F1 scores p-values were also calculated:
Comparison with `crosloengual-bert`:
| test | accuracy p-value | macro F1 p-value|
| --- | --- | --- |
|Wilcoxon|0.00781|0.00781|
|Mann-Whitney U test|0.00163|0.00108|
|Student t-test |0.000101|3.95e-05|
Comparison with `xlm-roberta-base`:
| test | accuracy p-value | macro F1 p-value|
| --- | --- | --- |
|Wilcoxon|0.00781|0.00781|
|Mann-Whitney U test|0.00108|0.00108|
|Student t-test |9.46e-11|6.94e-11|
## Use examples
```python
from simpletransformers.classification import ClassificationModel
model_args = {
"num_train_epochs": 6,
"learning_rate": 3e-6,
"train_batch_size": 69}
model = ClassificationModel(
"camembert", "5roop/sloberta-frenk-hate", use_cuda=True,
args=model_args
)
predictions, logit_output = model.predict(["Silva, ti si grda in neprijazna", "Naša hiša ima dimnik"])
predictions
### Output:
### array([1, 0])
```
## Citation
If you use the model, please cite the following paper on which the original model is based:
```
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
and the dataset used for fine-tuning:
```
@misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
}
``` |
creat89/NER_FEDA_Bg | 83305a32394b78190b385757bf26cb744cdc43cf | 2022-04-13T09:26:23.000Z | [
"pytorch",
"bert",
"multilingual",
"bg",
"mk",
"transformers",
"labse",
"ner",
"license:mit"
] | null | false | creat89 | null | creat89/NER_FEDA_Bg | 30 | null | transformers | 7,150 | ---
license: mit
language:
- multilingual
- bg
- mk
tags:
- labse
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets all using IOBES formats:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical.
You can select the tagset to use in the output by configuring the model. This model handles uppercased words differently.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA). |
emil2000/dialogpt-for-french-language | 183ec8ed380bebf8fc6142477db0f633dc88ade7 | 2021-09-25T21:50:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"fr",
"transformers"
] | text-generation | false | emil2000 | null | emil2000/dialogpt-for-french-language | 30 | null | transformers | 7,151 | ---
language:
- fr
tags:
- fr
- gpt2
---
This model aims to be a French conversational agent. It is a fine-tuning of DialoGPT for the French language. The dataset used gathers 36k conversations extracted from books, movies, interviews and French-learning dialogues.
More details about the model can be found [there](https://github.com/emil2000dza/DialoGPT-fine-tuned-for-french-language)
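A minimal single-turn chat sketch in the usual DialoGPT style (the French prompt and the generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("emil2000/dialogpt-for-french-language")
model = AutoModelForCausalLM.from_pretrained("emil2000/dialogpt-for-french-language")

# encode the user message followed by the end-of-sequence token
input_ids = tokenizer.encode("Bonjour, comment vas-tu ?" + tokenizer.eos_token, return_tensors="pt")
# generate a reply and decode only the newly generated tokens
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```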
|
ffsouza/tiny-mbart-length-128-finetuned-en-to-ro | d87878b027afe7a1ebecde17f4e674288220e564 | 2021-11-30T06:12:22.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ffsouza | null | ffsouza/tiny-mbart-length-128-finetuned-en-to-ro | 30 | null | transformers | 7,152 | Entry not found |
flax-community/bengali-t5-base | e27fdf4c9c55d7c6e12df9ee4209eb2e3c1cd4ba | 2021-07-19T06:27:44.000Z | [
"pytorch",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | flax-community | null | flax-community/bengali-t5-base | 30 | null | transformers | 7,153 | # bengali-t5-base
**bengali-t5-base** is a model trained on the Bengali portion of the mC4 dataset (the corpus used to pretrain mT5). We used the `T5-base` architecture for this model.
The model was trained during the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by [HuggingFace](https://huggingface.co/), with TPU usage sponsored by Google.
The model is trained on around ~11B tokens (64 size batch, 512 tokens, 350k steps).
## load tokenizer
```
>>> import transformers
>>> tokenizer = transformers.AutoTokenizer.from_pretrained("flax-community/bengali-t5-base")
>>> tokenizer.encode("আমি বাংলার গান গাই")
>>> tokenizer.decode([93, 1912, 814, 5995, 3, 1])
```
```
[93, 1912, 814, 5995, 3, 1]
'আমি বাংলার গান গাই </s>'
```
## load model
```
>>> from transformers import T5Config, FlaxT5ForConditionalGeneration
>>> config = T5Config.from_pretrained("flax-community/bengali-t5-base")
>>> model = FlaxT5ForConditionalGeneration.from_pretrained("flax-community/bengali-t5-base", config=config)
```
The model is trained on the `de-noising` objective following the scripts [here](https://huggingface.co/flax-community/bengali-t5-base/blob/main/run_t5_mlm_flax.py) and [here](https://huggingface.co/flax-community/bengali-t5-base/blob/main/run.sh). Currently, this model doesn't have any generation capability. If you want this model to have generation capability, please do a finetuning on the `prefix-LM` objective mentioned in the [paper](https://arxiv.org/abs/1910.10683).
See the tensorboard log in the `Training metrics` tab.
Please note that we haven't finetuned the model on any downstream task.
## Proposal
- [Project Proposal](https://discuss.huggingface.co/t/pretrain-t5-from-scratch-in-bengali/7121)
## Participants
- [Ibraheem Muhammad Moosa](https://huggingface.co/ibraheemmoosa)
- [Tasnim Mohiuddin](https://huggingface.co/tasnim)
- [Khalid Saifullah](https://huggingface.co/khalidsaifullaah)
- [Tahsin Mayeesha](https://tahsin-mayeesha.github.io/)
- [M Saiful Bari](https://huggingface.co/sbmaruf)
## Useful links
- [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6)
- [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md)
- [Masked Language Modelling example scripts](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling)
- [Model Repository](https://huggingface.co/flax-community/roberta-base-als-demo)
|
gagan3012/project-code-py-small | df793bd9be993ba5842c664b470b5977bfd2ad49 | 2021-05-21T16:06:24.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | gagan3012 | null | gagan3012/project-code-py-small | 30 | null | transformers | 7,154 | # Leetcode using AI :robot:
GPT-2 Model for Leetcode Questions in python
**Note**: the Answers might not make sense in some cases because of the bias in GPT-2
**Contributions:** If you would like to make the model better, contributions are welcome. Check out [CONTRIBUTIONS.md](https://github.com/gagan3012/project-code-py/blob/master/CONTRIBUTIONS.md)
### 📢 Favour:
It would be highly motivating if you could STAR⭐ this repo, should you find it helpful.
## Model
Two models have been developed for different use cases and they can be found at https://huggingface.co/gagan3012
The model weights can be found here: [GPT-2](https://huggingface.co/gagan3012/project-code-py) and [DistilGPT-2](https://huggingface.co/gagan3012/project-code-py-small)
### Example usage:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/project-code-py")
model = AutoModelWithLMHead.from_pretrained("gagan3012/project-code-py")
```
## Demo
[](https://share.streamlit.io/gagan3012/project-code-py/app.py)
A streamlit webapp has been setup to use the model: https://share.streamlit.io/gagan3012/project-code-py/app.py

## Example results:
### Question:
```
Write a function to delete a node in a singly-linked list. You will not be given access to the head of the list, instead you will be given access to the node to be deleted directly. It is guaranteed that the node to be deleted is not a tail node in the list.
```
### Answer:
```python
""" Write a function to delete a node in a singly-linked list. You will not be given access to the head of the list, instead you will be given access to the node to be deleted directly. It is guaranteed that the node to be deleted is not a tail node in the list.
For example,
a = 1->2->3
b = 3->1->2
t = ListNode(-1, 1)
Note: The lexicographic ordering of the nodes in a tree matters. Do not assign values to nodes in a tree.
Example 1:
Input: [1,2,3]
Output: 1->2->5
Explanation: 1->2->3->3->4, then 1->2->5[2] and then 5->1->3->4.
Note:
The length of a linked list will be in the range [1, 1000].
Node.val must be a valid LinkedListNode type.
Both the length and the value of the nodes in a linked list will be in the range [-1000, 1000].
All nodes are distinct.
"""
# Definition for singly-linked list.
# class ListNode:
# def __init__(self, x):
# self.val = x
# self.next = None
class Solution:
def deleteNode(self, head: ListNode, val: int) -> None:
"""
BFS
Linked List
:param head: ListNode
:param val: int
:return: ListNode
"""
if head is not None:
return head
dummy = ListNode(-1, 1)
dummy.next = head
dummy.next.val = val
dummy.next.next = head
dummy.val = ""
s1 = Solution()
print(s1.deleteNode(head))
print(s1.deleteNode(-1))
print(s1.deleteNode(-1))
```
|
google/t5-efficient-mini | 094079e325c02e6dcb4a1d22826599c555e70aa2 | 2022-02-15T10:56:20.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-efficient-mini | 30 | null | transformers | 7,155 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-MINI (Deep-Narrow version)
T5-Efficient-MINI is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-mini** - is of model type **Mini** with no variations.
It has **31.23** million parameters and thus requires *ca.* **124.92 MB** of memory in full precision (*fp32*)
or **62.46 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
helboukkouri/character-bert-medical | 7fef3ced1b1f6f2ba16c6ee3b95f0865c1c28738 | 2021-05-17T10:41:06.000Z | [
"pytorch",
"character_bert",
"transformers"
] | null | false | helboukkouri | null | helboukkouri/character-bert-medical | 30 | 1 | transformers | 7,156 | Entry not found |
huawei-noah/TinyBERT_6L_zh | f6b4e4a4a3937d95ee0509b3f2c03d4d127ba31b | 2020-10-14T09:05:20.000Z | [
"pytorch",
"transformers"
] | null | false | huawei-noah | null | huawei-noah/TinyBERT_6L_zh | 30 | null | transformers | 7,157 | Entry not found |
huggingtweets/messiah_niko | 5b7ac06353b139944334aa4893992b42105d9cdf | 2021-06-07T08:29:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/messiah_niko | 30 | null | transformers | 7,158 | ---
language: en
thumbnail: https://www.huggingtweets.com/messiah_niko/1623054570608/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1323150543460577280/qH9qh3Hg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NikoTheMessiah</div>
<div style="text-align: center; font-size: 14px;">@messiah_niko</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NikoTheMessiah.
| Data | NikoTheMessiah |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 0 |
| Short tweets | 1095 |
| Tweets kept | 2154 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3hqsklu0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @messiah_niko's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2fov69x9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2fov69x9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/messiah_niko')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/nintendoamerica | 3f4b92dc125e23a3bec7557e6ba5e1d4bcfa3a64 | 2021-05-22T16:26:49.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/nintendoamerica | 30 | null | transformers | 7,159 | ---
language: en
thumbnail: https://www.huggingtweets.com/nintendoamerica/1601313308462/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1309477596598431744/Jrcoh81s_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Nintendo of America 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@nintendoamerica bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@nintendoamerica's tweets](https://twitter.com/nintendoamerica).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3217</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>819</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>22</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2376</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/12wbwplr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nintendoamerica's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/yegfsqbf) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/yegfsqbf/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/nintendoamerica'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
jaron-maene/gpt2-medium-nl2bash | fea3a00df96b702af04f6a7aa3f6fbdd7bfb1295 | 2021-05-23T05:42:13.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jaron-maene | null | jaron-maene/gpt2-medium-nl2bash | 30 | null | transformers | 7,160 | Entry not found |
jpwahle/xlnet-base-plagiarism-detection | bd898e8d4cbe9d1d2ebc71397c1694fd4634954f | 2021-09-24T07:44:27.000Z | [
"pytorch",
"xlnet",
"text-classification",
"ISO 639-1 code for your language, or `multilingual`",
"dataset:array of dataset identifiers",
"arxiv:1906.08237",
"transformers",
"array",
"of",
"tags"
] | text-classification | false | jpwahle | null | jpwahle/xlnet-base-plagiarism-detection | 30 | 1 | transformers | 7,161 | ---
language: ISO 639-1 code for your language, or `multilingual`
thumbnail: url to a thumbnail used in social sharing
tags:
- array
- of
- tags
datasets:
- array of dataset identifiers
metrics:
- array of metric identifiers
widget:
- text: Copyright infringement is viewed as an infringement of scholarly uprightness
and a penetrate of editorial morals.
---
# XLNet-LMGC-M for Machine-Paraphrased Plagiarism Detection
This is the checkpoint for LMGC based on XLNet-base after being trained on the Machine-Paraphrased Plagiarism Dataset: [](https://doi.org/10.5281/zenodo.3608000)
Additional information about this model:
* [The xlnet-base model page](https://huggingface.co/xlnet-base-cased)
* [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/pdf/1906.08237.pdf)
The model can be loaded to perform plagiarism detection like so:
```py
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("jpwahle/xlnet-base-plagiarism-detection")
tokenizer = AutoTokenizer.from_pretrained("jpwahle/xlnet-base-plagiarism-detection")

text = 'Copyright infringement is viewed as an infringement of scholarly uprightness and a penetrate of editorial morals.'
inputs = tokenizer(text, add_special_tokens=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])
# e.g. "plagiarism"
``` |
keshan/sinhala-roberta-mc4 | a3f63861ccb8c137facaed07fe9c1e3c8a48d148 | 2021-09-23T16:05:07.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"si",
"transformers",
"sinhala",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | keshan | null | keshan/sinhala-roberta-mc4 | 30 | null | transformers | 7,162 | ---
language: si
license: cc-by-4.0
tags:
- sinhala
- roberta
pipeline_tag: fill-mask
widget:
- text: මම සිංහල භාෂාව <mask>
---
# Sinhala RoBERTa pretrained on the mC4 dataset
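A minimal usage sketch with the `fill-mask` pipeline, reusing the masked sentence from this card's widget:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="keshan/sinhala-roberta-mc4")
print(fill_mask("මම සිංහල භාෂාව <mask>"))
```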
|
kz/mt5base-finetuned-ECC-japanese-small | c9b54a17f1652d39c3a1486a6712e094cca47031 | 2022-05-26T13:50:56.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"ja",
"arxiv:2201.11903",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | kz | null | kz/mt5base-finetuned-ECC-japanese-small | 30 | 1 | transformers | 7,163 | ---
language: "ja"
widget:
- text: "吾輩をは猫である。を書いた作家は,夏目漱 <extra_id_0>"
- text: "吾輩をは猫である。名前えはまだない。"
- text: "translate japanese to english: 赤い花. => red flower. 青い花. => <extra_id_0>"
license: "mit"
---
Google's mt5-base fine-tuned on Japanese to solve an error detection and correction task.
# Japanese error correction (日本語誤り訂正)
- "吾輩をは猫である。名前えはまだない。"→"吾輩は猫である。名前はまだない。"
- "-small" has been trained on 20,000 text pairs only.
- dataset: [link](http://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9EWikipedia%E5%85%A5%E5%8A%9B%E8%AA%A4%E3%82%8A%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) *used only first 20,000 text pairs.
- prefix: "correction: " (notice: single task trained.)
- Please treat this as a casual, get-a-feel-for-it demo of text-to-text error correction; a minimal usage sketch follows.
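A minimal inference sketch, assuming the `correction: ` prefix described above (the generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "kz/mt5base-finetuned-ECC-japanese-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "correction: 吾輩をは猫である。名前えはまだない。"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# expected, per the examples above: 吾輩は猫である。名前はまだない。
```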
## Reference examples (参考)
- "東北大学でMASKが研究をしています。"→"東北大学でMASKの研究をしています。" ジム・キャリーを主語とした唯一のガ格が消され、ジム・キャリーは研究対象となった。易読化のために用いられる主語と動詞を近づける記法は誤り扱い?
- "東北大学でマスクが研究をしています。"→"東北大学でマスクの研究をしています。"
- "東北大学でイーロン・マスクが研究をしています。"→"東北大学でイーロン・マスクが研究をしています。"
- "東北大学で「イーロン・マスク」が研究をしています。"→"東北大学で「イーロン・マスク」の研究をしています。" 単語の意味も考慮されている?
- "東北大学でイマスクが研究をしています。"→"東北大学でイマスクの研究をしています。"
- "東北大学でクが研究をしています。"→"東北大学でコンピューターが研究をしています。" それはちょっと待って。
## Reference: exploring with extra_id (replace <> with half-width characters)
- "東北大学で <extra_id_0> の研究をしています。"→"東北大学で化学の研究をしています。"
- "東北大学で <extra_id_0> が研究をしています。"→"東北大学で工学が研究をしています。" 工学さん。
- "吾輩は <extra_id_0> である。"→"吾輩は吾輩である。"
- "答えは猫です。吾輩は <extra_id_0> である。"→"答えは猫です。吾輩は猫である。"
- "答えは猫です。吾輩の <extra_id_0> である。"→"答えは猫です。吾輩の心は猫である。"
- "私は猫です。私は <extra_id_0>"→"私は猫です。私は猫です。"
- "私は猫です。N/A <extra_id_0>"→"猫です。"
- "あなたは女性で猫です。彼は犬です。彼女は <extra_id_0>"→"あなたは女性で猫です。彼は犬です。彼女は猫です。"
- "あなたは女性で猫です。彼は犬です。彼は <extra_id_0>"→"あなたは女性で猫です。彼は犬です。"
- "あなたは女性で猫です。彼は犬です。彼は男性で <extra_id_0>"→"あなたは女性で猫です。彼は犬です。彼は男性で猫です。"
- "あなたは女性で猫です。彼は犬です。ライオンは <extra_id_0>"→"あなたは女性で猫です。彼は犬です。ライオンは猫です。"
- "あなたがは女性で猫です。彼はが犬です。ライオンが <extra_id_0>"→"あなたが女性で猫です。彼は犬です。ライオンが犬です。"
- "Aは11、Bは9。Aは <extra_id_0> 。Bは <extra_id_1> 。"→"Aは11、Bは9。Aは11。Bは9。"
- "彼の名前はallenです。彼のnameは <extra_id_0>"→"彼の名前はallenです。彼の名前は英語です。"
- "translate japanease to english: 赤い花. => red flower. 青い花. => <extra_id_0>"→"赤い花. => red flower. 青い花. => blue flower" タスク比依存翻訳可能性の片鱗.japaneseをjapaneaseと間違えたことは秘密だ・・・と言うか間違えても動くのか
## Prompting reference
Chain of Thought Prompting Elicits Reasoning in Large Language Models
https://arxiv.org/abs/2201.11903
**check in progress**
## License
- The MIT license |
lucio/xls-r-uyghur-cv8 | 3aeee04baa659b20258b57368f80ddad58ac6ccb | 2022-03-23T18:28:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ug",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lucio | null | lucio/xls-r-uyghur-cv8 | 30 | 1 | transformers | 7,164 | ---
language:
- ug
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- ug
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M Uyghur CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ug
metrics:
- name: Test WER
type: wer
value: 30.5
- name: Test CER
type: cer
value: 5.8
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M Uyghur CV8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2026
- Wer: 0.3248
## Model description
For a description of the model architecture, see [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
The model vocabulary consists of the alphabetic characters of the [Perso-Arabic script for the Uyghur language](https://omniglot.com/writing/uyghur.htm), with punctuation removed.
## Intended uses & limitations
This model is expected to be of some utility for low-fidelity use cases such as:
- Draft video captions
- Indexing of recorded broadcasts
The model is not reliable enough to use as a substitute for live captions for accessibility purposes, and it should not be used in a manner that would infringe the privacy of any of the contributors to the Common Voice dataset nor any other speakers.
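A minimal transcription sketch with the `automatic-speech-recognition` pipeline (the audio filename is illustrative; the input should be 16 kHz mono speech):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lucio/xls-r-uyghur-cv8")
print(asr("sample_uyghur.wav"))
```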
## Training and evaluation data
The combination of `train` and `dev` of common voice official splits were used as training data. The official `test` split was used as validation data as well as for final evaluation.
## Training procedure
The featurization layers of the XLS-R model are frozen while tuning a final CTC/LM layer on the Uyghur CV8 example sentences. A ramped learning rate is used with an initial warmup phase of 2000 steps, a max of 0.0001, and cooling back towards 0 for the remainder of the 9400 steps (100 epochs).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3036 | 5.32 | 500 | 3.2628 | 1.0 |
| 2.9734 | 10.63 | 1000 | 2.5677 | 0.9980 |
| 1.3466 | 15.95 | 1500 | 0.4455 | 0.6306 |
| 1.2424 | 21.28 | 2000 | 0.3603 | 0.5301 |
| 1.1655 | 26.59 | 2500 | 0.3165 | 0.4740 |
| 1.1026 | 31.91 | 3000 | 0.2930 | 0.4400 |
| 1.0655 | 37.23 | 3500 | 0.2675 | 0.4159 |
| 1.0239 | 42.55 | 4000 | 0.2580 | 0.3913 |
| 0.9938 | 47.87 | 4500 | 0.2373 | 0.3698 |
| 0.9655 | 53.19 | 5000 | 0.2379 | 0.3675 |
| 0.9374 | 58.51 | 5500 | 0.2486 | 0.3795 |
| 0.9065 | 63.83 | 6000 | 0.2243 | 0.3405 |
| 0.888 | 69.15 | 6500 | 0.2157 | 0.3277 |
| 0.8646 | 74.47 | 7000 | 0.2103 | 0.3288 |
| 0.8602 | 79.78 | 7500 | 0.2088 | 0.3238 |
| 0.8442 | 85.11 | 8000 | 0.2045 | 0.3266 |
| 0.8335 | 90.42 | 8500 | 0.2038 | 0.3241 |
| 0.8288 | 95.74 | 9000 | 0.2024 | 0.3280 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
mpoyraz/wav2vec2-xls-r-300m-cv8-turkish | bdd0bb878d8bf59509611631e334eb073ba57773 | 2022-03-23T18:29:03.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"common_voice",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mpoyraz | null | mpoyraz/wav2vec2-xls-r-300m-cv8-turkish | 30 | 1 | transformers | 7,165 | ---
license: apache-2.0
language: tr
tags:
- automatic-speech-recognition
- common_voice
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- tr
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: mpoyraz/wav2vec2-xls-r-300m-cv8-turkish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: tr
metrics:
- name: Test WER
type: wer
value: 10.61
- name: Test CER
type: cer
value: 2.67
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: tr
metrics:
- name: Test WER
type: wer
value: 36.46
- name: Test CER
type: cer
value: 12.38
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: tr
metrics:
- name: Test WER
type: wer
value: 40.91
---
# wav2vec2-xls-r-300m-cv8-turkish
## Model description
This ASR model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for the Turkish language.
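A minimal transcription sketch with the `automatic-speech-recognition` pipeline (the audio filename is illustrative; expects 16 kHz mono speech). If the repository ships a kenlm decoder and `pyctcdecode`/`kenlm` are installed, the n-gram language model described below may be applied automatically; otherwise decoding is greedy CTC:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mpoyraz/wav2vec2-xls-r-300m-cv8-turkish")
print(asr("speech_tr.wav"))
```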
## Training and evaluation data
The following datasets were used for finetuning:
- [Common Voice 8.0 TR](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0): the entire `validated` split, except the `test` split, was used for training.
## Training procedure
To support the datasets above, custom pre-processing and loading steps were performed, and the [wav2vec2-turkish](https://github.com/mpoyraz/wav2vec2-turkish) repo was used for that purpose.
### Training hyperparameters
The following hyperparameters were used for finetuning:
- learning_rate 2.5e-4
- num_train_epochs 20
- warmup_steps 500
- freeze_feature_extractor
- mask_time_prob 0.1
- mask_feature_prob 0.1
- feat_proj_dropout 0.05
- attention_dropout 0.05
- final_dropout 0.1
- activation_dropout 0.05
- per_device_train_batch_size 8
- per_device_eval_batch_size 8
- gradient_accumulation_steps 8
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
## Language Model
N-gram language model is trained on a Turkish Wikipedia articles using KenLM and [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate arpa LM and convert it into binary format.
## Evaluation Commands
Please install [unicode_tr](https://pypi.org/project/unicode_tr/) package before running evaluation. It is used for Turkish text processing.
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv8-turkish --dataset mozilla-foundation/common_voice_8_0 --config tr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv8-turkish --dataset speech-recognition-community-v2/dev_data --config tr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Evaluation results:
| Dataset | WER | CER |
|---|---|---|
|Common Voice 8 TR test split| 10.61 | 2.67 |
|Speech Recognition Community dev data| 36.46 | 12.38 |
|
mrm8488/distilroberta-finetuned-banking77 | 827f854e4028255343744993763901683c0cab8d | 2021-08-21T05:29:11.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:banking77",
"transformers",
"banking",
"intent",
"multiclass"
] | text-classification | false | mrm8488 | null | mrm8488/distilroberta-finetuned-banking77 | 30 | 3 | transformers | 7,166 | ---
language: en
tags:
- banking
- intent
- multiclass
datasets:
- banking77
widget:
- text: "How long until my transfer goes through?"
---
# distilroberta-base fine-tuned on banking77 dataset for intent classification
Test set accuracy: 0.896
## How to use
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
ckpt = 'mrm8488/distilroberta-finetuned-banking77'
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)
classifier = pipeline('text-classification', tokenizer=tokenizer, model=model)
classifier('What is the base of the exchange rates?')
# Output: [{'label': 'exchange_rate', 'score': 0.8509947657585144}]
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain |
pere/nb-nn-translation | 46f078914489b4a00b6f493c0c3e93cf9be53c68 | 2021-09-23T16:19:21.000Z | [
"pytorch",
"jax",
"no",
"dataset:oscar",
"translation",
"license:cc-by-4.0"
] | translation | false | pere | null | pere/nb-nn-translation | 30 | 2 | null | 7,167 | ---
language: no
license: cc-by-4.0
tags:
- translation
datasets:
- oscar
widget:
- text: Skriv inn en tekst som du ønsker å oversette til en annen målform.
---
# 🇳🇴 Bokmål ⇔ Nynorsk 🇳🇴
Norwegian has two relatively similar written languages; Bokmål and Nynorsk. Historically Nynorsk is a written norm based on dialects curated by the linguist Ivar Aasen in the mid-to-late 1800s, whereas Bokmål is a gradual 'Norwegization' of written Danish.
The two written languages are considered equal and citizens have a right to receive public service information in their primary and preferred language. Even though this right has been around for a long time, only between 5-10% of Norwegian texts are written in Nynorsk. Nynorsk is therefore a low-resource language within a low-resource language.
Apart from some word-list based engines, there are no working off-the-shelf machine learning-based translation models. Translation between Bokmål and Nynorsk is not available in Google Translate.
## Demo
| | |
|---|---|
| Widget | Try the widget in the top right corner |
| Huggingface Spaces | [Spaces Demo](https://huggingface.co/spaces/NbAiLab/nb2nn) |
| | |
## Pretraining a T5-base
There is an [mt5](https://huggingface.co/google/mt5-base) that includes Norwegian. Unfortunately a very small part of this is Nynorsk; there is only around 1GB Nynorsk text in mC4. Despite this, the mt5 also gives a BLEU score above 80. During the project we extracted all available Nynorsk text from the [Norwegian Colossal Corpus](https://github.com/NBAiLab/notram/blob/master/guides/corpus_v2_summary.md) at the National Library of Norway, and matched it (by material type i.e. book, newspapers and so on) with an equal amount of Bokmål. The corpus collection is described [here](https://github.com/NBAiLab/notram/blob/master/guides/nb_nn_balanced_corpus.md) and the total size is 19GB.
## Finetuning - BLEU-SCORE 88.17 🎉
The central finetuning data of the project have been 200k translation units (TU) i.e. aligned pairs of sentences in the respective languages extracted from textbooks of various subjects and newspapers.
Training for [10] epochs with a learning rate of [7e-4], a batch size of [32] and a max source and target length of [512] fine tuning reached a SACREBLEU score of [88.03] at training and a test score of [**88.17**] after training.
## This is not a translator
We found out that we were able to get almost identical BLEU-score with training it both ways, and letting the model decide if the input is in Bokmål or Nynorsk. This way we can train one model instead of two. We call it a language switcher.
## Future work
The following Google Docs Add-on is currently pending approval.

## How to use the model
```python
# Set up the pipeline
from transformers import pipeline
translator = pipeline("translation", model='pere/nb-nn-translation')
# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
``` |
persiannlp/wikibert-base-parsinlu-entailment | 0c1bcd3dae471681977503e8381714a5aed5b05b | 2021-09-23T16:20:55.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"entailment",
"wikibert",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0"
] | text-classification | false | persiannlp | null | persiannlp/wikibert-base-parsinlu-entailment | 30 | null | transformers | 7,168 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- wikibert
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
labels = ["entails", "contradicts", "neutral"]
model_name_or_path = "persiannlp/wikibert-base-parsinlu-entailment"
model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,)
def model_predict(text_a, text_b):
features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt')
output = model(**features)
logits = output[0]
probs = torch.nn.functional.softmax(logits, dim=1).tolist()
idx = np.argmax(np.array(probs))
print(labels[idx], probs)
model_predict(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
model_predict(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
model_predict(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
pmthangk09/bert-base-uncased-glue-cola | 8ee606fb48f03794a5ef549205f05ae48370d0dc | 2021-05-20T02:47:36.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pmthangk09 | null | pmthangk09/bert-base-uncased-glue-cola | 30 | null | transformers | 7,169 | Entry not found |
popcornell/FasNetTAC-paper | d87f356489a4d280948080f808c3f02280e97a0c | 2021-09-23T16:21:33.000Z | [
"pytorch",
"dataset:TACDataset",
"dataset:sep_noisy",
"asteroid",
"audio",
"FasNet-TAC",
"audio-to-audio",
"multichannel",
"beamforming",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | popcornell | null | popcornell/FasNetTAC-paper | 30 | 2 | asteroid | 7,170 | ---
tags:
- asteroid
- audio
- FasNet-TAC
- audio-to-audio
- multichannel
- beamforming
datasets:
- TACDataset
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `Samuele Cornell/FasNetTAC_TACDataset_separatenoisy`
Imported from [Zenodo](https://zenodo.org/record/4557489)
### Description:
This model was trained by popcornell using the TAC/TAC recipe in Asteroid. It was trained on the separate_noisy task of the TACDataset dataset.
### Training config:
```yaml
data:
dev_json: ./data/validation.json
sample_rate: 16000
segment: None
test_json: ./data/test.json
train_json: ./data/train.json
net:
chunk_size: 50
context_ms: 16
enc_dim: 64
feature_dim: 64
hidden_dim: 128
hop_size: 25
n_layers: 4
n_src: 2
window_ms: 4
optim:
lr: 0.001
weight_decay: 1e-06
training:
accumulate_batches: 1
batch_size: 8
early_stop: True
epochs: 200
gradient_clipping: 5
half_lr: True
num_workers: 8
patience: 30
save_top_k: 10
```
### Results:
```yaml
si_sdr: 10.871864315894744
si_sdr_imp: 11.322284052560262
```
### License notice:
This work "FasNetTAC_TACDataset_separatenoisy" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov, used under CC BY 4.0; of End-to-end Microphone Permutation and Number Invariant Multi-channel Speech Separation by Yi Luo, Zhuo Chen, Nima Mesgarani, Takuya Yoshioka, used under CC BY 4.0. "FasNetTAC_TACDataset_separatenoisy" is licensed under Attribution-ShareAlike 3.0 Unported by popcornell.
|
saichandrapandraju/t5_small_tabqgen | 9ccd2c9af3c17e499d5bd682f9ebf21291aaff3b | 2021-06-23T14:04:49.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | saichandrapandraju | null | saichandrapandraju/t5_small_tabqgen | 30 | null | transformers | 7,171 | Entry not found |
sentence-transformers/nli-bert-large | 8a3386060e2c164316539309ae31e9c5c76e648a | 2021-08-05T08:27:47.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/nli-bert-large | 30 | null | sentence-transformers | 7,172 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/nli-bert-large
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/nli-bert-large')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-bert-large')
model = AutoModel.from_pretrained('sentence-transformers/nli-bert-large')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-bert-large)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
severinsimmler/german-press-bert | 62bba72d77173c958ee65e09772bbd5fae8703ca | 2021-05-20T05:46:27.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | severinsimmler | null | severinsimmler/german-press-bert | 30 | null | transformers | 7,173 | Entry not found |
sonoisa/sentence-t5-base-ja-mean-tokens | 72d9b3d7fac5b92daa1e7a214abe1266ca9a6571 | 2022-07-28T05:20:27.000Z | [
"pytorch",
"t5",
"feature-extraction",
"ja",
"sentence-transformers",
"sentence-t5",
"sentence-similarity",
"license:cc-by-sa-4.0"
] | feature-extraction | false | sonoisa | null | sonoisa/sentence-t5-base-ja-mean-tokens | 30 | null | sentence-transformers | 7,174 | ---
language: ja
license: cc-by-sa-4.0
tags:
- sentence-transformers
- sentence-t5
- feature-extraction
- sentence-similarity
---
This is a Japanese sentence-T5 model.
It was initialized from the pre-trained model [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese).
Running inference requires sentencepiece (`pip install sentencepiece`).
On a private in-house dataset, its accuracy is comparable to [sonoisa/sentence-bert-base-ja-mean-tokens](https://huggingface.co/sonoisa/sentence-bert-base-ja-mean-tokens).
# Usage
```python
from transformers import T5Tokenizer, T5Model
import torch
class SentenceT5:
def __init__(self, model_name_or_path, device=None):
self.tokenizer = T5Tokenizer.from_pretrained(model_name_or_path, is_fast=False)
self.model = T5Model.from_pretrained(model_name_or_path).encoder
self.model.eval()
if device is None:
device = "cuda" if torch.cuda.is_available() else "cpu"
self.device = torch.device(device)
self.model.to(device)
def _mean_pooling(self, model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
@torch.no_grad()
def encode(self, sentences, batch_size=8):
all_embeddings = []
iterator = range(0, len(sentences), batch_size)
for batch_idx in iterator:
batch = sentences[batch_idx:batch_idx + batch_size]
encoded_input = self.tokenizer.batch_encode_plus(batch, padding="longest",
truncation=True, return_tensors="pt").to(self.device)
model_output = self.model(**encoded_input)
sentence_embeddings = self._mean_pooling(model_output, encoded_input["attention_mask"]).to('cpu')
all_embeddings.extend(sentence_embeddings)
return torch.stack(all_embeddings)
MODEL_NAME = "sonoisa/sentence-t5-base-ja-mean-tokens"
model = SentenceT5(MODEL_NAME)
sentences = ["暴走したAI", "暴走した人工知能"]
sentence_embeddings = model.encode(sentences, batch_size=8)
print("Sentence embeddings:", sentence_embeddings)
```
|
stmnk/codet5-small-code-summarization-python | 9fc1e2ca74def81717c2e82ee70c3de83419013e | 2021-11-19T17:50:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"py",
"en",
"dataset:code_x_glue_ct_code_to_text",
"dataset:code_x_glue_ct_code_to_text (python)",
"transformers",
"Code2TextGeneration",
"Code2TextSummarisation",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | stmnk | null | stmnk/codet5-small-code-summarization-python | 30 | null | transformers | 7,175 | ---
language:
- py
- en
thumbnail: "url to a thumbnail used in social sharing"
tags:
- Code2TextGeneration
- Code2TextSummarisation
license: apache-2.0
datasets:
- code_x_glue_ct_code_to_text
- code_x_glue_ct_code_to_text (python)
metrics:
- code-x-bleu
---
pretrained model: https://huggingface.co/Salesforce/codet5-small
finetuning dataset: https://huggingface.co/datasets/code_x_glue_ct_code_to_text (only the python split)
official inference checkpoint (for comparison, base size rather than small): https://storage.googleapis.com/sfr-codet5-data-research/finetuned_models/summarize_python_codet5_base.bin
for fine-tuning process metrics see [this w&b report](https://wandb.ai/stmnk/CodeT5/reports/Code-T5-code_x_glue_code2text--VmlldzoxMjM4MTUy?accessToken=5stsbg6bn2x0m6svrosxtq0zv3vhlgzr4cjcyapw52xq5puc09wo6f8li40ln7fm)
<!-- <iframe src="https://wandb.ai/stmnk/CodeT5/reports/Code-T5-code_x_glue_code2text--VmlldzoxMjM4MTUy" style="border:none;height:1024px;width:100%"> -->
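A minimal inference sketch (not from the original card), assuming the standard `transformers` seq2seq API and a short Python function as input:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "stmnk/codet5-small-code-summarization-python"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Arbitrary example function; any Python snippet can be summarised the same way.
code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```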
|
tals/albert-xlarge-vitaminc-fever | 7496f8f6c4350e2b151ef982e6d8903fbd77c8c9 | 2022-06-22T23:55:46.000Z | [
"pytorch",
"albert",
"text-classification",
"python",
"dataset:fever",
"dataset:glue",
"dataset:tals/vitaminc",
"transformers"
] | text-classification | false | tals | null | tals/albert-xlarge-vitaminc-fever | 30 | null | transformers | 7,176 | ---
language: python
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
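# Example usage
A minimal classification sketch (not from the original card); it assumes the claim/evidence pair is passed as a standard sentence pair, and the label names come from the checkpoint's own `id2label` mapping.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "tals/albert-xlarge-vitaminc-fever"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

claim = "The corpus contains over 100,000 Wikipedia revisions."
evidence = "We collect over 100,000 Wikipedia revisions that modify an underlying fact."
inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```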
When using this model, please cite the paper.
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|
tesemnikov-av/NER-RUBERT-Per-Loc-Org | 9c9c1bd59cbb1a61999b9735dd73af8ec0ad7ba4 | 2022-02-04T19:40:56.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tesemnikov-av | null | tesemnikov-av/NER-RUBERT-Per-Loc-Org | 30 | null | transformers | 7,177 | ---
widget:
- text: "В город Сергиев Посад приехал Курт Кобейн."
---
This model is a fine-tuned version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny), trained on sentences from Wikipedia auto-annotated with PER, LOC and ORG tags ([corus/WiNER](https://pypi.org/project/corus/#reference)).
Language: Russian (ru)
NER classes:
- PER
- LOC
- ORG
License: MIT
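A minimal usage sketch (not from the original card), using the standard `transformers` token-classification pipeline with the widget example above:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tesemnikov-av/NER-RUBERT-Per-Loc-Org",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("В город Сергиев Посад приехал Курт Кобейн."))
```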
|
vachonni/wav2vec2-large-xls-r-300m-dansk-CV-80 | 85f4ac5dfea7d89e1b7114f1597294798a626cd8 | 2022-02-01T07:55:36.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | vachonni | null | vachonni/wav2vec2-large-xls-r-300m-dansk-CV-80 | 30 | 2 | transformers | 7,178 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-dansk-CV-80
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dansk-CV-80
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Danish, using the [mozilla-foundation/common_voice_8_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6394
- eval_wer: 0.3682
- eval_runtime: 104.0466
- eval_samples_per_second: 13.359
- eval_steps_per_second: 1.672
- epoch: 21.28
- step: 2000
## Model description
ASR Danish model
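A minimal transcription sketch (not from the original card); it assumes a local 16 kHz mono audio file (the placeholder filename below is arbitrary) and the standard `transformers` ASR pipeline, which needs ffmpeg to decode audio files:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="vachonni/wav2vec2-large-xls-r-300m-dansk-CV-80",
)
# "dansk_eksempel.wav" is a placeholder; substitute any Danish speech recording.
print(asr("dansk_eksempel.wav")["text"])
```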
## Intended uses & limitations
More information needed
## Training and evaluation data
Danish subset of [mozilla-foundation/common_voice_8_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
valhalla/electra-base-discriminator-finetuned_squadv1 | e70021ecbe89606f35b8d7c2e37fa86ff6cc60cc | 2020-12-11T22:03:34.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | valhalla | null | valhalla/electra-base-discriminator-finetuned_squadv1 | 30 | null | transformers | 7,179 | # ELECTRA-BASE-DISCRIMINATOR finetuned on SQuADv1
This is electra-base-discriminator model finetuned on SQuADv1 dataset for for question answering task.
## Model details
As mentioned in the original paper: ELECTRA is a new method for self-supervised language representation learning.
It can be used to pre-train transformer networks using relatively little compute.
ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network,
similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU.
At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.
| Param | #Value |
|---------------------|--------|
| layers | 12 |
| hidden size | 768 |
| num attention heads | 12 |
| on disk size | 436MB |
## Model training
This model was trained on google colab v100 GPU.
You can find the fine-tuning colab here
[](https://colab.research.google.com/drive/11yo-LaFsgggwmDSy2P8zD3tzf5cCb-DU?usp=sharing).
## Results
The results are actually slightly better than given in the paper.
In the paper the authors mentioned that electra-base achieves 84.5 EM and 90.8 F1
| Metric | #Value |
|--------|--------|
| EM | 85.0520|
| F1 | 91.6050|
## Model in Action 🚀
```python3
from transformers import pipeline
nlp = pipeline('question-answering', model='valhalla/electra-base-discriminator-finetuned_squadv1')
nlp({
'question': 'What is the answer to everything ?',
'context': '42 is the answer to life the universe and everything'
})
=> {'answer': '42', 'end': 2, 'score': 0.981274963050339, 'start': 0}
```
> Created with ❤️ by Suraj Patil [](https://github.com/patil-suraj/)
[](https://twitter.com/psuraj28)
|
w11wo/malaysian-distilbert-small | 5cdb75e9d2059fad16ffb3df956241ff30f6ed12 | 2021-07-11T15:56:09.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"ms",
"dataset:oscar",
"arxiv:1910.01108",
"transformers",
"malaysian-distilbert-small",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | w11wo | null | w11wo/malaysian-distilbert-small | 30 | null | transformers | 7,180 | ---
language: ms
tags:
- malaysian-distilbert-small
license: mit
datasets:
- oscar
widget:
- text: "Hari ini adalah hari yang [MASK]!"
---
## Malaysian DistilBERT Small
Malaysian DistilBERT Small is a masked language model based on the [DistilBERT model](https://arxiv.org/abs/1910.01108). It was trained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, specifically the `unshuffled_original_ms` subset.
The model was originally HuggingFace's pretrained [English DistilBERT model](https://huggingface.co/distilbert-base-uncased) and is later fine-tuned on the Malaysian dataset. It achieved a perplexity of 10.33 on the validation dataset (20% of the dataset). Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), and [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou).
Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base DistilBERT model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|------------------------------|---------|------------------|----------------------------------------|
| `malaysian-distilbert-small` | 66M | DistilBERT Small | OSCAR `unshuffled_original_ms` Dataset |
## Evaluation Results
The model was trained for 1 epoch and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 2.476 | 2.336 | 10.33 | 0:40:05 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/malaysian-distilbert-small"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Henry adalah seorang lelaki yang tinggal di [MASK].")
```
### Feature Extraction in PyTorch
```python
from transformers import DistilBertModel, DistilBertTokenizerFast
pretrained_name = "w11wo/malaysian-distilbert-small"
model = DistilBertModel.from_pretrained(pretrained_name)
tokenizer = DistilBertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Bolehkah anda [MASK] Bahasa Melayu?"
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases which came from the OSCAR dataset that may be carried over into the results of this model.
## Author
Malaysian DistilBERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
wietsedv/bert-base-multilingual-cased-finetuned-udlassy-ner | 74fd633e7b8efef418a1cb1aaf75230208375357 | 2021-05-20T09:16:27.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/bert-base-multilingual-cased-finetuned-udlassy-ner | 30 | null | transformers | 7,181 | Entry not found |
mmaguero/gn-bert-tiny-cased | 3163de7fea4aaddb4074c3a4b58fbf27a4807dc7 | 2022-03-06T08:09:35.000Z | [
"pytorch",
"bert",
"fill-mask",
"gn",
"dataset:wikipedia",
"dataset:wiktionary",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | mmaguero | null | mmaguero/gn-bert-tiny-cased | 30 | null | transformers | 7,182 | ---
language: gn
license: mit
datasets:
- wikipedia
- wiktionary
widget:
- text: "Paraguay ha'e peteĩ táva oĩva [MASK] retãme "
---
# BERT-i-tiny-cased (gnBERT-tiny-cased)
A pre-trained BERT model for **Guarani** (2 layers, cased). Trained on Wikipedia + Wiktionary (~800K tokens).
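A minimal fill-mask sketch (not from the original card), reusing the widget example above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mmaguero/gn-bert-tiny-cased")
print(fill_mask("Paraguay ha'e peteĩ táva oĩva [MASK] retãme"))
```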
|
hackathon-pln-es/jurisbert-class-tratados-internacionales-sistema-universal | 575e1de2b39397c7ee6adcd0ee25cdf41b294a64 | 2022-03-28T19:02:27.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"es",
"transformers",
"license:cc-by-nc-4.0"
] | text-classification | false | hackathon-pln-es | null | hackathon-pln-es/jurisbert-class-tratados-internacionales-sistema-universal | 30 | 3 | transformers | 7,183 | ---
license: cc-by-nc-4.0
language: es
widget:
- text: "A los 4 Civiles de Rosarito se les acusó de cometer varios delitos federales en flagrancia, aunque se ha comprobado que no fueron detenidos en el lugar en el que los militares señalaron en su parte informativo. Las cuatro personas refieren que el 17 de junio de 2009 fueron obligados a firmar sus declaraciones ante el Ministerio Público mediante torturas y con los ojos vendados. A pesar de haberlos visto severamente golpeados, el agente del Ministerio Público determinó que debían seguir bajo custodia militar."
---
## Model description
jurisbert-class-tratados-internacionales-sistema-universal is a text classification model trained on a Spanish-language corpus in a supervised manner.
It was built on top of JurisBERT, a masked language model pre-trained on a Spanish legal corpus.
Given an input text in Spanish, jurisbert-class-tratados-internacionales-sistema-universal predicts which of the following 8 UN conventions is most closely related to it:
1) Convención Internacional sobre la Protección de los Derechos de todos los Trabajadores Migratorios y de sus Familias
2) Convención de los Derechos del Niño
3) Convención sobre la Eliminación de todas las formas de Discriminación contra la Mujer
4) Pacto Internacional de Derechos Civiles y Políticos
5) Convención Internacional Sobre la Eliminación de Todas las Formas de Discriminación Racial
6) Convención contra la Tortura y otros Tratos o Penas Crueles, Inhumanos o Degradantes
7) Convención sobre los Derechos de las Personas con Discapacidad
8) Pacto Internacional de Derechos Económicos, Sociales y Culturales
## Intended uses & limitations
You can use the model to find which UN instruments are most closely related to the text you enter.
Keep in mind that this model is mainly intended for classification tasks, i.e. when you primarily want to know which instruments are most related to your topic of interest.
## How to use
```python
# You can use this model directly with SimpleTransformers.
# To install SimpleTransformers:
# pip install simpletransformers
from simpletransformers.classification import ClassificationModel
# Create a ClassificationModel
model = ClassificationModel(
    "roberta", "hackathon-pln-es/jurisbert-class-tratados-internacionales-sistema-universal", use_cuda=True
)
predecir = ["adoptar a un niño"]
predictions, raw_outputs = model.predict(predecir)
predictions
```
## Training data
The jurisbert-class-tratados-internacionales-sistema-universal model was trained on a dataset consisting of 3,799 texts, each labeled with one of 8 convention types.
## Training procedure
The texts are processed with SimpleTransformers and trained for three epochs on a RoBERTa-style base, using JurisBERT, a masked language model pre-trained on a Spanish legal corpus, as the specific base model.
## Variables and metrics
For training, 90% of our 3,799 examples were used; the evaluation split was:
Train: 3419
Test: 380
## Evaluation results
| | precision | recall | f1-score | support |
|---|---|---|---|---|
| accuracy | | |0.91 | 380 |
| macro avg | 0.92 |0.91 |0.91 | 380 |
| weighted avg | 0.91 | 0.91 |0.91 | 380 |
Accuracy: 0.9105
## Team
The team consists of @gpalomeque, @aureliopvs, @cecilimacias, @giomadariaga and @cattsytabla. |
ai4bharat/MultiIndicHeadlineGeneration | 1332a232585ce151009c5f577cc1418d99f746be | 2022-05-06T10:39:48.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2203.05437",
"transformers",
"multilingual",
"nlp",
"indicnlp",
"autotrain_compatible"
] | text2text-generation | false | ai4bharat | null | ai4bharat/MultiIndicHeadlineGeneration | 30 | null | transformers | 7,184 |
---
languages:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
tags:
- multilingual
- nlp
- indicnlp
widget:
- text: वैश्विक व्यापार युद्ध की शिकार हुई तुर्की की मुद्रा लीरा के डूबने से अमेरिकी डॉलर के मुकाबले रुपया अब तक के न्यूनतम स्तर पर पहुंच गया। रुपये में रिकॉर्ड गिरावट से सोने की चमक में निखार नहीं आ सकी। वैश्विक बाजार में सोना करीब आठ महीने के निचले स्तर पर पहुंच गया तो घरेलू बाजार में यह करीब नौ महीने के निचले स्तर पर चला गया। वैश्विक मंदी की आशंका से वैश्विक बाजार में चांदी करीब ढाई साल और घरेलू बाजार में तकरीबन नौ महीने के निचले स्तर पर पहुंच गई। तुर्की की आर्थिक चिंता के कारण अमेरिकी डॉलर के मुकाबले रुपया कारोबार के दौरान 70.80 के स्तर तक गिर गया। यह इसका ऐतिहासिक रिकॉर्ड निम्न स्तर है। कमजोर रुपये से सोने की चमक बढऩे की उम्मीद की जा रही थी लेकिन वैश्विक बाजार में सोने की कीमत गिरकर 1,193.50 डॉलर प्रति औंस पहुंचने के कारण घरेलू बाजार में भी सोने की चमक फीकी पड़ गई। घरेलू बाजार में सोना गिरकर 29,655 रुपये प्रति 10 ग्राम पहुंच गया। घरेलू वायदा बाजार यानी एमसीएक्स पर सोना 29,700 के आस-पास कारोबार कर रहा है। देश में इस साल सोने की मांग में लगातार गिरावट देखने को मिल रही थी। अप्रैल-जून तिमाही में सोने का आयात 25 फीसदी से भी कम हुआ है। चालू महीने में सोने की मांग बढऩे की उम्मीद जगी थी लेकिन यह उम्मीद टूट सकती है क्योंकि दुनिया के सबसे बड़े गोल्ड फंड एसपीडीआर गोल्ड की होल्डिंग अप्रैल के बाद 10 फीसदी गिर चुकी है। इस समय यह पिछले ढाई साल के निचले स्तर पर है। इस साल वैश्विक बाजार में सोना करीब 8.5 फीसदी और घरेलू बाजार में 1.5 फीसदी टूट चुका है। सराफा मामलों के जानकार अनिल अग्रवाल कहते हैं कि वैश्विक हालात ऐसे हैं कि इस समय निवेशक डॉलर में पैसा लगा रहे हैं। इस कारण दूसरी मुद्रा और जिंस दबाव में हैं। हालांकि हालात यही रहे तो सोने में तेज सुधार भी देखने को मिलेगा। वैश्विक मंदी की बढ़ती आशंका का सबसे ज्यादा असर चांदी पर पड़ रहा है। वैश्विक बाजार में चांदी के दाम ढाई साल के निचले स्तर पर पहुंच चुके हैं। वैश्विक बाजार में चांदी की कीमत 15 डॉलर प्रति औंस के करीब चल रही है। इसके पहले अप्रैल 2016 में चांदी इस स्तर पर थी। वैश्विक बाजार में चांदी के दाम दो महीने पहले 18.13 डॉलर प्रति औंस पर चल रहे थे। चांदी कारोबारी राहुल मेहता कहते हैं कि सोना और मूल धातु में कमजोरी से चांदी पर दोहरा दबाव पड़ रहा है। वैश्विक बाजार का व्यापार युद्ध अब मुद्रा युद्ध में बदल गया है। वैश्विक अर्थव्यवस्था एक बार फिर मंदी की गिरफ्त में आ सकती है जिसके कारण औद्योगिक विकास भी प्रभावित होगा। यही वजह है कि चांदी की कीमतें लगातार लुढक़ रही हैं क्योंकि मांग में कमी आने की आशंका बढ़ती जा रही है। फिलहाल घरेलू बाजार में चांदी 37,825 रुपये प्रति किलोग्राम पर बिक रही है। तुर्की के आर्थिक संकट से एक बार फिर वैश्विक मंदी का डर है जिसका असर दुनियाभर के बाजारों पर देखा जा सकता है। इसने विश्व स्तर पर निवेशकों के रुख को प्रभावित किया है और वे डॉलर को एक सुरक्षित निवेश के तौर पर देख रहे हैं। आनंद राठी शेयर्स ऐंड स्टाक ब्रोकर्स में शोध विश्लेषक आर मारू ने कहा कि आयातकों की अधिक मांग से रुपये की विनिमय दर में गिरावट आई। उन्होंने कहा, तुर्की संकट को लेकर अनिश्चितता तथा डॉलर सूचकांक में तेजी को देखते हुए आयातक आक्रमक तरीके से डॉलर की लिवाली कर रहे हैं। दूसरी तरफ आरबीआई की तरफ से आक्रमक हस्तक्षेप न होने से भी रुपया नीचे आया। सरकार ने अमेरिकी डॉलर के मुकाबले रुपये के अब तक के न्यूनतम स्तर पर पहुंचने के लिए बाह्य कारकों को जिम्मेदार ठहराते हुए कहा कि इसमें चिंता की कोई बात नहीं है।</s><2hi>
---
MultiIndicHeadlineGeneration is a multilingual, sequence-to-sequence pre-trained model focusing only on Indic languages. It currently supports 11 Indian languages and is fine-tuned from the [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint. You can use the MultiIndicHeadlineGeneration model to build natural language generation applications in Indian languages for tasks like summarization, headline generation and other summarization-related tasks. Some salient features of MultiIndicHeadlineGeneration are:
<ul>
<li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odiya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for finetuning and decoding. </li>
<li> Trained on large Indic language corpora (1.316 million paragraphs and 5.9 million unique tokens) . </li>
<li>All languages have been represented in Devanagari script to encourage transfer learning among the related languages.</li>
</ul>
# Usage:
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input and outputs. The format below is how MultiIndicHeadlineGenerationSS was trained so the input should be "Paragraph </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("यूट्यूब या फेसबुक पर वीडियो देखते समय आप भी बफरिंग की वजह से परेशान होते हैं? इसका जवाब हां है तो जल्द ही आपकी सारी समस्या खत्म होने वाली है। दरअसल, टेलीकॉम मिनिस्टर अश्विनी वैष्णव ने पिछले सप्ताह कहा कि अगस्त के अंत तक हर-हाल में '5G' इंटरनेट लॉन्च हो जाएगा। उन्होंने यह भी कहा है कि स्पेक्ट्रम की बिक्री शुरू हो चुकी है और जून तक ये प्रोसेस खत्म होने की संभावना है।</s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[43615, 116, 4426, 46, . . . . 64001, 64006]])
out = tokenizer("<2hi> 5G इंटरनेट का इंतजार हुआ खत्म:अगस्त तक देश में शुरू हो सकती है 5G सर्विस </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[64006, 393, 1690, . . . . 1690, 11999, 64001]])
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
# For loss
model_outputs.loss ## This is not label smoothed.
# For logits
model_outputs.logits
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=32, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # अगस्त के अंत तक शुरू हो जाएगा '5G' इंटरनेट
```
# Note:
If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script.
# Benchmarks
Scores on the `MultiIndicHeadlineGeneration` test sets are as follows:
Language | Rouge-1 / Rouge-2 / Rouge-L
---------|----------------------------
as | 46.06 / 30.02 / 44.64
bn | 34.22 / 19.18 / 32.60
gu | 33.49 / 17.49 / 31.79
hi | 37.14 / 18.04 / 32.70
kn | 64.82 / 53.91 / 64.10
ml | 58.69 / 47.18 / 57.94
mr | 35.20 / 19.50 / 34.08
or | 22.51 / 9.00 / 21.62
pa | 46.47 / 29.07 / 43.25
ta | 47.39 / 31.39 / 45.94
te | 37.69 / 21.89 / 36.66
average | 42.15 / 26.97 / 40.48
# Contributors
<ul>
<li> Aman Kumar </li>
<li> Prachi Sahu </li>
<li> Himani Shrotriya </li>
<li> Raj Dabre </li>
<li> Anoop Kunchukuttan </li>
<li> Ratish Puduppully </li>
<li> Mitesh M. Khapra </li>
<li> Pratyush Kumar </li>
</ul>
# Paper
If you use MultiIndicHeadlineGeneration, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
|
danjohnvelasco/filipino-sentence-roberta-v1 | b542d6e4e2b37cbb1ce3ecd9d3741afb324e24bf | 2022-04-09T09:45:29.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"tl",
"dataset:newsph_nli",
"arxiv:2204.03251",
"sentence-transformers",
"tagalog",
"filipino",
"license:cc-by-sa-4.0"
] | feature-extraction | false | danjohnvelasco | null | danjohnvelasco/filipino-sentence-roberta-v1 | 30 | 1 | sentence-transformers | 7,185 | ---
language: tl
tags:
- roberta
- tagalog
- filipino
- sentence-transformers
datasets: newsph_nli
license: cc-by-sa-4.0
---
# Filipino Sentence RoBERTa
We fine-tuned [RoBERTa Tagalog Base (finetuned on COHFIE)](https://huggingface.co/danjohnvelasco/roberta-tagalog-base-cohfie-v1) on [NewsPH-NLI](https://huggingface.co/datasets/newsph_nli) to encode Filipino/Tagalog sentences into sentence embeddings. We used [sentence-transformers](https://www.SBERT.net) to fine-tune the model. All model details, training setups, and corpus details can be found in this paper: [Automatic WordNet Construction using Word Sense Induction through Sentence Embeddings](https://arxiv.org/abs/2204.03251).
## Intended uses & limitations
The intended use of this model is to extract sentence embeddings which will be used for clustering. This model may not be safe for use in production since we did not examine it for biases. Please use it with caution.
## How to use
Using this model is easier when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Here is how to use this model to encode sentences to sentence embeddings using `SentenceTransformer`:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("danjohnvelasco/filipino-sentence-roberta-v1")
sentence_list = ["sentence 1", "sentence 2", "sentence 3"]
sentence_embeddings = model.encode(sentence_list)
print(sentence_embeddings)
```
## BibTeX entry and citation info
If you use this model, please cite our work:
```
@misc{https://doi.org/10.48550/arxiv.2204.03251,
doi = {10.48550/ARXIV.2204.03251},
url = {https://arxiv.org/abs/2204.03251},
author = {Velasco, Dan John and Alba, Axel and Pelagio, Trisha Gail and Ramirez, Bryce Anthony and Cruz, Jan Christian Blaise and Cheng, Charibeth},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Automatic WordNet Construction using Word Sense Induction through Sentence Embeddings},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
philschmid/roberta-large-finetuned-clinc | 10dab4d778797f61ac2ea488175a6512012a1d90 | 2022-04-14T13:25:42.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | philschmid | null | philschmid/roberta-large-finetuned-clinc | 30 | null | transformers | 7,186 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9703225806451613
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-clinc
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2109
- Accuracy: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0643 | 1.0 | 120 | 5.0440 | 0.0065 |
| 4.2726 | 2.0 | 240 | 2.7488 | 0.7255 |
| 1.9687 | 3.0 | 360 | 0.8694 | 0.9174 |
| 0.5773 | 4.0 | 480 | 0.3267 | 0.9539 |
| 0.1842 | 5.0 | 600 | 0.2109 | 0.9703 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Intel/roberta-base-mrpc-int8-static | 4a3c353d63adea27b478734ae637aa4f0bc729b8 | 2022-06-10T02:37:58.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:glue",
"transformers",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"license:mit",
"model-index"
] | text-classification | false | Intel | null | Intel/roberta-base-mrpc-int8-static | 30 | null | transformers | 7,187 | ---
language:
- en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- glue
metrics:
- f1
model-index:
- name: roberta-base-mrpc-int8-static
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.924693520140105
---
# INT8 roberta-base-mrpc
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [roberta-base-mrpc](https://huggingface.co/Intel/roberta-base-mrpc).
The calibration dataloader is the train dataloader. The default calibration sampling size 300 isn't divisible exactly by batch size 8, so the real sampling size is 304.
The embedding module **roberta.embeddings.token_type_embeddings** falls back to fp32 due to *RuntimeError('Expect weight, indices, and offsets to be contiguous.')*
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9247|0.9138|
| **Model size (MB)** |121|476|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
'Intel/roberta-base-mrpc-int8-static',
)
```
|
Hate-speech-CNERG/hindi-abusive-MuRIL | a551d03550a8434fb2b3c701fbcfe547f83b7d9b | 2022-05-03T08:51:13.000Z | [
"pytorch",
"bert",
"text-classification",
"hi",
"arxiv:2204.12543",
"transformers",
"license:afl-3.0"
] | text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/hindi-abusive-MuRIL | 30 | null | transformers | 7,188 | ---
language: [hi]
license: afl-3.0
---
This model is used for detecting **abusive speech** in **Devanagari Hindi**. It is fine-tuned from the MuRIL model using a Hindi abusive speech dataset.
The model is trained with learning rates of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive)
Label mapping:
- LABEL_0 : Normal
- LABEL_1 : Abusive
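### Usage
A minimal classification sketch (not from the original card); the input sentence is an arbitrary Hindi example, and the labels follow the mapping listed above:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/hindi-abusive-MuRIL",
)
# LABEL_0 -> Normal, LABEL_1 -> Abusive (see mapping above); the sentence is an arbitrary example.
print(classifier("यह एक सामान्य वाक्य है।"))
```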
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ |
mikeadimech/bart-qmsum-meeting-summarization | e420b1b303d0fe66761c147a1145f6f76a393e53 | 2022-05-25T16:14:18.000Z | [
"pytorch",
"bart",
"text2text-generation",
"dataset:yawnick/QMSum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mikeadimech | null | mikeadimech/bart-qmsum-meeting-summarization | 30 | 1 | transformers | 7,189 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-qmsum-meeting-summarization
results: []
datasets:
- yawnick/QMSum
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-qmsum-meeting-summarization
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the QMSum dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3354
- Rouge1: 39.5539
- Rouge2: 12.1134
- Rougel: 23.9163
- Rougelsum: 36.0299
- Gen Len: 117.225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 5.5573 | 2.17 | 100 | 5.4074 | 23.6282 | 4.1122 | 14.584 | 21.2263 | 84.75 |
| 5.4721 | 4.35 | 200 | 5.2899 | 24.61 | 4.272 | 15.2096 | 22.2997 | 87.2 |
| 5.3407 | 6.52 | 300 | 5.1360 | 25.8272 | 4.3314 | 15.9926 | 23.3416 | 87.95 |
| 5.1527 | 8.7 | 400 | 4.9751 | 27.7207 | 5.31 | 16.7055 | 24.8357 | 88.35 |
| 5.0058 | 10.87 | 500 | 4.8372 | 30.1847 | 6.8615 | 18.934 | 27.2424 | 89.95 |
| 4.8807 | 13.04 | 600 | 4.7488 | 33.1208 | 9.1784 | 20.655 | 30.1198 | 101.3 |
| 4.7931 | 15.22 | 700 | 4.6891 | 33.2266 | 8.4253 | 20.0334 | 30.4093 | 108.925 |
| 4.7272 | 17.39 | 800 | 4.6467 | 35.0475 | 9.326 | 21.0655 | 31.8413 | 111.7 |
| 4.6904 | 19.57 | 900 | 4.6102 | 34.869 | 9.6046 | 21.395 | 32.4346 | 115.05 |
| 4.6547 | 21.74 | 1000 | 4.5829 | 36.3392 | 10.9936 | 22.1524 | 33.6863 | 119.875 |
| 4.594 | 23.91 | 1100 | 4.5602 | 35.9717 | 10.3827 | 21.6118 | 32.8302 | 119.5 |
| 4.5714 | 26.09 | 1200 | 4.5424 | 36.3656 | 10.6282 | 22.2187 | 33.6494 | 118.0 |
| 4.542 | 28.26 | 1300 | 4.5256 | 36.7386 | 10.615 | 22.2487 | 34.1927 | 115.675 |
| 4.5092 | 30.43 | 1400 | 4.5116 | 37.1597 | 10.7751 | 22.6747 | 34.396 | 118.55 |
| 4.5031 | 32.61 | 1500 | 4.4981 | 37.6108 | 10.9732 | 22.8342 | 34.6833 | 117.125 |
| 4.4682 | 34.78 | 1600 | 4.4875 | 37.5057 | 11.1328 | 22.8973 | 34.7114 | 117.65 |
| 4.4387 | 36.96 | 1700 | 4.4775 | 38.1278 | 11.3597 | 23.1307 | 35.1869 | 115.65 |
| 4.4085 | 39.13 | 1800 | 4.4682 | 37.9578 | 11.4355 | 23.1149 | 35.4961 | 119.6 |
| 4.4166 | 41.3 | 1900 | 4.4592 | 38.1467 | 11.3208 | 23.045 | 35.0824 | 120.05 |
| 4.3971 | 43.48 | 2000 | 4.4517 | 37.9922 | 11.5071 | 23.3983 | 34.6918 | 114.425 |
| 4.3638 | 45.65 | 2100 | 4.4438 | 38.1666 | 11.4985 | 23.5518 | 35.1484 | 117.2 |
| 4.3522 | 47.83 | 2200 | 4.4377 | 37.7572 | 11.3984 | 23.4437 | 35.0453 | 113.725 |
| 4.3398 | 50.0 | 2300 | 4.4320 | 38.5833 | 11.4575 | 23.6411 | 35.3437 | 116.125 |
| 4.3341 | 52.17 | 2400 | 4.4247 | 38.2705 | 12.0374 | 23.5807 | 34.9985 | 110.8 |
| 4.3024 | 54.35 | 2500 | 4.4201 | 39.0206 | 12.2041 | 23.4394 | 35.6291 | 114.5 |
| 4.3117 | 56.52 | 2600 | 4.4147 | 38.6555 | 12.1079 | 23.5655 | 35.5287 | 111.325 |
| 4.2659 | 58.7 | 2700 | 4.4107 | 39.2235 | 12.025 | 23.934 | 36.2243 | 113.3 |
| 4.2946 | 60.87 | 2800 | 4.4055 | 39.0301 | 12.1833 | 23.8999 | 36.0487 | 110.325 |
| 4.2431 | 63.04 | 2900 | 4.4009 | 39.0498 | 12.3215 | 23.9686 | 36.0277 | 112.775 |
| 4.2439 | 65.22 | 3000 | 4.3968 | 38.8786 | 12.0985 | 23.8308 | 35.8575 | 115.175 |
| 4.2244 | 67.39 | 3100 | 4.3922 | 38.7614 | 12.1721 | 23.7736 | 35.6744 | 113.55 |
| 4.235 | 69.57 | 3200 | 4.3895 | 38.6858 | 11.3994 | 23.6392 | 35.3456 | 114.125 |
| 4.2064 | 71.74 | 3300 | 4.3859 | 39.0258 | 12.0435 | 24.2528 | 35.8378 | 113.5 |
| 4.1934 | 73.91 | 3400 | 4.3835 | 39.0467 | 11.5556 | 23.6704 | 35.5643 | 111.5 |
| 4.1859 | 76.09 | 3500 | 4.3800 | 38.776 | 11.729 | 24.1254 | 35.3894 | 112.9 |
| 4.1762 | 78.26 | 3600 | 4.3775 | 38.9465 | 11.9112 | 23.8123 | 35.5453 | 114.125 |
| 4.1848 | 80.43 | 3700 | 4.3744 | 39.2783 | 11.6539 | 23.8236 | 35.8465 | 110.225 |
| 4.1386 | 82.61 | 3800 | 4.3730 | 38.8894 | 11.4784 | 23.7534 | 35.5464 | 113.15 |
| 4.1483 | 84.78 | 3900 | 4.3710 | 39.2734 | 12.0285 | 23.8171 | 35.6884 | 115.95 |
| 4.1428 | 86.96 | 4000 | 4.3688 | 39.6134 | 12.0616 | 23.7454 | 36.0363 | 113.375 |
| 4.133 | 89.13 | 4100 | 4.3663 | 38.935 | 11.4781 | 23.8766 | 35.4061 | 114.15 |
| 4.1211 | 91.3 | 4200 | 4.3648 | 39.1488 | 11.8399 | 23.9935 | 35.3107 | 113.975 |
| 4.1076 | 93.48 | 4300 | 4.3650 | 38.9764 | 11.9963 | 23.4994 | 35.7214 | 116.25 |
| 4.121 | 95.65 | 4400 | 4.3597 | 38.9418 | 11.8416 | 24.0272 | 35.6597 | 111.325 |
| 4.0936 | 97.83 | 4500 | 4.3602 | 39.266 | 12.5616 | 24.2046 | 36.1883 | 114.275 |
| 4.0841 | 100.0 | 4600 | 4.3588 | 39.4659 | 12.2132 | 24.0521 | 36.249 | 115.475 |
| 4.0768 | 102.17 | 4700 | 4.3578 | 39.4167 | 12.0587 | 24.025 | 35.9668 | 114.375 |
| 4.0711 | 104.35 | 4800 | 4.3541 | 39.6943 | 12.1095 | 24.0925 | 36.3496 | 115.65 |
| 4.072 | 106.52 | 4900 | 4.3539 | 40.2024 | 12.4618 | 24.2863 | 36.8844 | 113.475 |
| 4.0646 | 108.7 | 5000 | 4.3540 | 39.4299 | 11.8085 | 23.686 | 36.0454 | 113.975 |
| 4.0508 | 110.87 | 5100 | 4.3517 | 39.9217 | 11.9379 | 24.2299 | 36.6362 | 115.5 |
| 4.0549 | 113.04 | 5200 | 4.3498 | 40.3496 | 12.2558 | 24.0271 | 36.9715 | 112.5 |
| 4.0428 | 115.22 | 5300 | 4.3497 | 40.1349 | 12.0628 | 24.0622 | 36.9169 | 113.95 |
| 4.0391 | 117.39 | 5400 | 4.3480 | 40.1209 | 12.3587 | 24.3456 | 36.8411 | 116.025 |
| 4.0195 | 119.57 | 5500 | 4.3474 | 39.5209 | 12.1325 | 24.2622 | 36.4357 | 111.975 |
| 4.0054 | 121.74 | 5600 | 4.3468 | 40.2885 | 12.4453 | 24.2373 | 36.932 | 117.375 |
| 4.0286 | 123.91 | 5700 | 4.3465 | 39.3943 | 11.8399 | 23.9786 | 35.991 | 116.475 |
| 4.005 | 126.09 | 5800 | 4.3459 | 38.7442 | 11.7408 | 23.8948 | 35.3673 | 117.625 |
| 3.991 | 128.26 | 5900 | 4.3444 | 39.6276 | 12.1549 | 23.9542 | 36.3832 | 115.675 |
| 4.0137 | 130.43 | 6000 | 4.3427 | 39.8331 | 12.2687 | 24.187 | 36.6144 | 115.475 |
| 3.9755 | 132.61 | 6100 | 4.3438 | 39.1907 | 12.1033 | 24.2339 | 35.9126 | 114.525 |
| 4.0134 | 134.78 | 6200 | 4.3422 | 39.4298 | 11.862 | 24.0847 | 35.5744 | 115.025 |
| 3.9935 | 136.96 | 6300 | 4.3416 | 39.4158 | 11.6968 | 23.9636 | 35.8155 | 114.35 |
| 3.9606 | 139.13 | 6400 | 4.3409 | 39.1239 | 11.7046 | 23.6846 | 36.0431 | 114.775 |
| 3.9834 | 141.3 | 6500 | 4.3404 | 39.6375 | 12.2746 | 24.2636 | 36.1425 | 116.175 |
| 3.9687 | 143.48 | 6600 | 4.3409 | 39.1494 | 12.1404 | 24.0778 | 35.4932 | 118.05 |
| 3.9861 | 145.65 | 6700 | 4.3394 | 39.6258 | 12.2497 | 23.9662 | 36.4054 | 116.8 |
| 3.9755 | 147.83 | 6800 | 4.3400 | 39.3121 | 11.7831 | 23.6584 | 35.9636 | 118.125 |
| 3.9591 | 150.0 | 6900 | 4.3390 | 39.6957 | 11.9406 | 24.0599 | 36.3021 | 114.9 |
| 3.9599 | 152.17 | 7000 | 4.3389 | 39.4271 | 11.4159 | 24.1437 | 35.9056 | 115.8 |
| 3.9456 | 154.35 | 7100 | 4.3384 | 39.4862 | 11.726 | 23.883 | 35.9839 | 116.375 |
| 3.9341 | 156.52 | 7200 | 4.3386 | 39.6915 | 11.8028 | 24.346 | 36.406 | 116.425 |
| 3.9648 | 158.7 | 7300 | 4.3383 | 39.9311 | 11.7135 | 23.985 | 36.2617 | 118.075 |
| 3.9486 | 160.87 | 7400 | 4.3372 | 39.8375 | 12.0014 | 24.0969 | 36.5902 | 118.8 |
| 3.9533 | 163.04 | 7500 | 4.3371 | 40.2678 | 12.3137 | 24.1916 | 37.1632 | 118.075 |
| 3.9344 | 165.22 | 7600 | 4.3369 | 39.5588 | 11.6805 | 24.1474 | 36.2021 | 114.875 |
| 3.9314 | 167.39 | 7700 | 4.3368 | 39.8649 | 11.9824 | 24.5459 | 36.3921 | 113.65 |
| 3.9558 | 169.57 | 7800 | 4.3363 | 39.8428 | 12.0892 | 24.0175 | 36.67 | 112.7 |
| 3.928 | 171.74 | 7900 | 4.3364 | 39.2281 | 11.8456 | 23.7212 | 36.2005 | 113.95 |
| 3.9351 | 173.91 | 8000 | 4.3363 | 39.9798 | 12.4387 | 23.7687 | 36.6472 | 115.45 |
| 3.9326 | 176.09 | 8100 | 4.3363 | 39.9772 | 12.1193 | 24.1518 | 36.5791 | 117.4 |
| 3.9387 | 178.26 | 8200 | 4.3363 | 39.8629 | 12.1719 | 23.9446 | 36.345 | 115.075 |
| 3.9204 | 180.43 | 8300 | 4.3358 | 39.9738 | 12.3072 | 23.8641 | 36.4802 | 116.3 |
| 3.9418 | 182.61 | 8400 | 4.3357 | 40.1451 | 12.4144 | 24.1553 | 36.4251 | 116.025 |
| 3.9289 | 184.78 | 8500 | 4.3357 | 39.7241 | 12.0543 | 24.0752 | 36.0847 | 115.8 |
| 3.9176 | 186.96 | 8600 | 4.3358 | 39.7969 | 12.0967 | 24.123 | 36.2664 | 118.6 |
| 3.9097 | 189.13 | 8700 | 4.3356 | 39.4096 | 11.9872 | 24.0609 | 35.8662 | 117.2 |
| 3.938 | 191.3 | 8800 | 4.3354 | 39.4695 | 11.9343 | 24.0295 | 35.9372 | 117.025 |
| 3.9239 | 193.48 | 8900 | 4.3352 | 39.3231 | 12.0965 | 23.9131 | 35.9555 | 117.275 |
| 3.91 | 195.65 | 9000 | 4.3354 | 39.5932 | 12.1808 | 23.9233 | 36.0864 | 116.925 |
| 3.9234 | 197.83 | 9100 | 4.3354 | 39.5539 | 12.1134 | 23.9163 | 36.0299 | 117.225 |
| 3.9263 | 200.0 | 9200 | 4.3354 | 39.5539 | 12.1134 | 23.9163 | 36.0299 | 117.225 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion | 0706687fe4be0de2676a3454d8ca4d3abf440258 | 2022-07-20T15:35:20.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Abdelrahman-Rezk | null | Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion | 30 | null | transformers | 7,190 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8885
- name: F1
type: f1
value: 0.8818845305609924
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.892
verified: true
- name: Precision Macro
type: precision
value: 0.8923475194643138
verified: true
- name: Precision Micro
type: precision
value: 0.892
verified: true
- name: Precision Weighted
type: precision
value: 0.894495118514709
verified: true
- name: Recall Macro
type: recall
value: 0.768240931585822
verified: true
- name: Recall Micro
type: recall
value: 0.892
verified: true
- name: Recall Weighted
type: recall
value: 0.892
verified: true
- name: F1 Macro
type: f1
value: 0.7897026729904524
verified: true
- name: F1 Micro
type: f1
value: 0.892
verified: true
- name: F1 Weighted
type: f1
value: 0.8842367889371163
verified: true
- name: loss
type: loss
value: 0.34626322984695435
verified: true
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.8885
verified: true
- name: Precision Macro
type: precision
value: 0.8849064522901132
verified: true
- name: Precision Micro
type: precision
value: 0.8885
verified: true
- name: Precision Weighted
type: precision
value: 0.8922726271705158
verified: true
- name: Recall Macro
type: recall
value: 0.7854833401719518
verified: true
- name: Recall Micro
type: recall
value: 0.8885
verified: true
- name: Recall Weighted
type: recall
value: 0.8885
verified: true
- name: F1 Macro
type: f1
value: 0.8031492596189961
verified: true
- name: F1 Micro
type: f1
value: 0.8885
verified: true
- name: F1 Weighted
type: f1
value: 0.8818845305609924
verified: true
- name: loss
type: loss
value: 0.36373236775398254
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3663
- Accuracy: 0.8885
- F1: 0.8819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5574 | 0.822 | 0.7956 |
| 0.7483 | 2.0 | 250 | 0.3663 | 0.8885 | 0.8819 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BK-V/xlm-roberta-base-finetuned-arman-fa | 2e0c4b6bf0cd329c801ee95fd3ab8eadf5d2a73e | 2022-06-30T13:40:40.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | BK-V | null | BK-V/xlm-roberta-base-finetuned-arman-fa | 30 | null | transformers | 7,191 | ---
license: mit
tags:
- generated_from_trainer
- token-classification
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-arman-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-arman-fa
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0077
- F1: 0.9855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1054 | 1.0 | 2305 | 0.0497 | 0.8548 |
| 0.0419 | 2.0 | 4610 | 0.0339 | 0.8834 |
| 0.0185 | 3.0 | 6915 | 0.0159 | 0.9626 |
| 0.0068 | 4.0 | 9220 | 0.0103 | 0.9789 |
| 0.0025 | 5.0 | 11525 | 0.0077 | 0.9855 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Dafa/factcc | 733ef086ec54ec702dd952a00c3368fb8a63d199 | 2022-05-25T23:38:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:afl-3.0"
] | text-classification | false | Dafa | null | Dafa/factcc | 30 | null | transformers | 7,192 | ---
license: afl-3.0
---
|
Aktsvigun/bert-base-aeslc | 7d867d676594c398c57ff41ac7e53a470f35ad49 | 2022-05-27T20:45:49.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Aktsvigun | null | Aktsvigun/bert-base-aeslc | 30 | null | transformers | 7,193 | ---
license: apache-2.0
---
|
obokkkk/kc-bert_finetuned_unsmile | e5bbe6489ac56d2b4e18975efef8fa5a98a0edf4 | 2022-06-12T17:22:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | obokkkk | null | obokkkk/kc-bert_finetuned_unsmile | 30 | null | transformers | 7,194 | ---
tags:
- generated_from_trainer
model-index:
- name: kc-bert_finetuned_unsmile
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kc-bert_finetuned_unsmile
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1326
- Lrap: 0.8753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Lrap |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 235 | 0.1458 | 0.8612 |
| No log | 2.0 | 470 | 0.1280 | 0.8738 |
| 0.1685 | 3.0 | 705 | 0.1257 | 0.8791 |
| 0.1685 | 4.0 | 940 | 0.1281 | 0.8777 |
| 0.0774 | 5.0 | 1175 | 0.1326 | 0.8753 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.12.1
|
microsoft/markuplm-base-finetuned-websrc | 964cf4517792512bfdb1818d767d6799ebe5c06b | 2022-06-14T13:29:20.000Z | [
"pytorch",
"markuplm",
"question-answering",
"arxiv:2110.08518",
"transformers",
"autotrain_compatible"
] | question-answering | false | microsoft | null | microsoft/markuplm-base-finetuned-websrc | 30 | null | transformers | 7,195 | # MarkupLM
**Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
## Introduction
MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518), by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
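
A hedged usage sketch for the question-answering head (not part of the original card): it assumes a recent `transformers` release that ships the MarkupLM classes, and the HTML snippet and question are invented.

```python
import torch
from transformers import MarkupLMProcessor, MarkupLMForQuestionAnswering

checkpoint = "microsoft/markuplm-base-finetuned-websrc"
processor = MarkupLMProcessor.from_pretrained(checkpoint)
model = MarkupLMForQuestionAnswering.from_pretrained(checkpoint)

html_string = "<html><head><title>My name is Niels</title></head></html>"
question = "What's his name?"

encoding = processor(html_string, questions=question, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

# Decode the span between the most likely start and end positions.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = processor.decode(encoding.input_ids[0, start : end + 1], skip_special_tokens=True)
print(answer)
```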
|
AI-Prize-Challenges/autotrain-finetuned1-1035435583 | 83c9b6604643f236dea486fb6ba1629edc5b9ec5 | 2022-06-24T23:26:04.000Z | [
"pytorch",
"albert",
"text-classification",
"zh",
"dataset:AI-Prize-Challenges/autotrain-data-finetuned1",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | AI-Prize-Challenges | null | AI-Prize-Challenges/autotrain-finetuned1-1035435583 | 30 | null | transformers | 7,196 | ---
tags: autotrain
language: zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- AI-Prize-Challenges/autotrain-data-finetuned1
co2_eq_emissions: 0.03608660562919794
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1035435583
- CO2 Emissions (in grams): 0.03608660562919794
## Validation Metrics
- Loss: 0.31551286578178406
- Accuracy: 0.8816629547141797
- Precision: 0.8965702036441586
- Recall: 0.8906042054830983
- AUC: 0.9449180200540812
- F1: 0.8935772466283884
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/AI-Prize-Challenges/autotrain-finetuned1-1035435583
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("AI-Prize-Challenges/autotrain-finetuned1-1035435583", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("AI-Prize-Challenges/autotrain-finetuned1-1035435583", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
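# Added post-processing (not part of the original AutoTrain snippet): map the
# argmax of the logits back to the label names stored in the model config.
predicted_class_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])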
``` |
itzo/bert-base-uncased-fine-tuned-on-clinc_oos-dataset | c72afcfccbe74cb7d6cc1251adcc58c5b7279bce | 2022-07-04T11:05:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | itzo | null | itzo/bert-base-uncased-fine-tuned-on-clinc_oos-dataset | 30 | null | transformers | 7,197 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
model-index:
- name: bert-base-uncased-fine-tuned-on-clinc_oos-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-fine-tuned-on-clinc_oos-dataset
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2811
- Accuracy Score: 0.9239
- F1 Score: 0.9213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:--------:|
| 4.4271 | 1.0 | 239 | 3.5773 | 0.6116 | 0.5732 |
| 3.0415 | 2.0 | 478 | 2.4076 | 0.8390 | 0.8241 |
| 2.1182 | 3.0 | 717 | 1.7324 | 0.8994 | 0.8934 |
| 1.5897 | 4.0 | 956 | 1.3863 | 0.9210 | 0.9171 |
| 1.3458 | 5.0 | 1195 | 1.2811 | 0.9239 | 0.9213 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
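
A hedged inference sketch (not part of the original card): the checkpoint id is taken from the card title, the example utterance is invented, and the returned label name depends on how the label mapping was saved with the model.

```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="itzo/bert-base-uncased-fine-tuned-on-clinc_oos-dataset",
)
print(intent_classifier("Please transfer 100 dollars to my savings account."))
```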
|
dddb/title_generator | b53b1ed6f2847cd37d474efbd1c7e680546e4102 | 2022-06-30T13:27:17.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"unk",
"dataset:dddb/autotrain-data-mt5_chinese_small_finetune",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | dddb | null | dddb/title_generator | 30 | null | transformers | 7,198 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- dddb/autotrain-data-mt5_chinese_small_finetune
co2_eq_emissions: 0.2263611804615655
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1060836848
- CO2 Emissions (in grams): 0.2263611804615655
## Validation Metrics
- Loss: 2.3939340114593506
- Rouge1: 0.3375
- Rouge2: 0.0
- RougeL: 0.3375
- RougeLsum: 0.3375
- Gen Len: 11.4395
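
A hedged local-inference sketch (not part of the original card): the checkpoint id `dddb/title_generator` comes from this row's metadata, the summarization pipeline task is inferred from the problem type above, and the Chinese example text is invented.

```python
from transformers import pipeline

title_generator = pipeline("summarization", model="dddb/title_generator")
article = "今天上午,某市举行新能源汽车产业论坛,多家企业代表分享了最新的电池与自动驾驶技术进展。"
print(title_generator(article, max_length=20)[0]["summary_text"])
```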
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dddb/autotrain-mt5_chinese_small_finetune-1060836848
``` |
abhishek-shrm/roberta-base-finetuned-beer-ner | 2923751dec0bc2a87772a53d48ac95c0c63897fc | 2022-07-03T09:27:51.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | abhishek-shrm | null | abhishek-shrm/roberta-base-finetuned-beer-ner | 30 | null | transformers | 7,199 | Entry not found |