modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Zaib/Vulnerability-detection | 429d6167e1c00b8490310d27352aac652daba00e | 2022-07-16T11:03:58.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Zaib | null | Zaib/Vulnerability-detection | 28 | null | transformers | 7,400 | ---
tags:
- generated_from_trainer
model-index:
- name: Vulnerability-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Vulnerability-detection
This model is a fine-tuned version of [mrm8488/codebert-base-finetuned-detect-insecure-code](https://huggingface.co/mrm8488/codebert-base-finetuned-detect-insecure-code) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6701
## Model description
More information needed
## Intended uses & limitations
More information needed
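As a minimal usage sketch, the checkpoint can be loaded with the 🤗 Transformers text-classification pipeline. The label names and the exact expected input format are not documented in this card, and the C snippet below is purely illustrative:
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint with the text-classification pipeline.
# The label names are defined by the model config; the C snippet is illustrative only.
classifier = pipeline("text-classification", model="Zaib/Vulnerability-detection")

code_snippet = "char buf[8]; strcpy(buf, user_input);"
print(classifier(code_snippet))
```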
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tokeron/alephbert-finetuned-metaphor-detection | 9aeee9b43ff977a9d131d2609bbda881205cab0a | 2022-07-20T09:21:13.000Z | [
"pytorch",
"bert",
"token-classification",
"he",
"dataset:Piyutim",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | token-classification | false | tokeron | null | tokeron/alephbert-finetuned-metaphor-detection | 28 | null | transformers | 7,401 | ---
license: afl-3.0
language:
- he
tags:
- token-classification
datasets:
- Piyutim
model:
- onlplab/alephbert-base
metrics:
- f1
widget:
- text: "נשבר לי הגב"
example_title: "Broken back"
- text: "ש לו לב זהב"
example_title: "Golden heart"
model-index:
- name: tokeron/alephbert-finetuned-metaphor-detection
  results: []
---
This is a token-classification model.
This model is AlephBERT fine-tuned to detect metaphors in Hebrew Piyutim.
# model
This model fine-tunes the onlplab/alephbert-base model on the Piyutim dataset.
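A minimal usage sketch with the 🤗 Transformers token-classification pipeline, reusing a widget example from this card (the label names come from the model configuration):
```python
from transformers import pipeline

# Token-classification sketch using a widget example from this card.
nlp = pipeline("token-classification", model="tokeron/alephbert-finetuned-metaphor-detection")
print(nlp("נשבר לי הגב"))
```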
### About Us
Created by Michael Toker in collaboration with Yonatan Belinkov, Benny Kornfeld, Oren Mishali, and Ophir Münz-Manor.
For collaboration, please contact:
[email protected]
|
Be-Lo/xtremedistil-l6-h256-uncased-natural-questions-short | 89eaf5c1247e3b1d007ab9053175f795ab468bcf | 2022-07-22T17:23:04.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"transformers",
"natural-questions-short",
"license:mit",
"autotrain_compatible"
] | question-answering | false | Be-Lo | null | Be-Lo/xtremedistil-l6-h256-uncased-natural-questions-short | 28 | null | transformers | 7,402 | ---
language: en
tags:
- natural-questions-short
- question-answering
license: mit
---
# xtremedistil-l6-h256-uncased for QA
This is a [xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) model, fine-tuned using the [NaturalQuestionsShort](https://research.google/pubs/pub47761/) dataset from the [MRQA Shared Task 2019](https://github.com/mrqa/MRQA-Shared-Task-2019) repository.
## Overview
**Language model:** xtremedistil-l6-h256-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** NaturalQuestionsShort
**Eval data:** NaturalQuestionsShort
**Infrastructure**: Google Colaboratory GPU
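## Usage
A minimal sketch with the 🤗 Transformers question-answering pipeline; the question and context below are illustrative and not taken from the training data:
```python
from transformers import pipeline

model_name = "Be-Lo/xtremedistil-l6-h256-uncased-natural-questions-short"

# Question-answering sketch; the question/context pair is illustrative.
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)
result = qa(
    question="Where do giant pandas live?",
    context="The giant panda is a bear species endemic to China, living in mountain forests in Sichuan, Shaanxi and Gansu.",
)
print(result["answer"], result["score"])
```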
## Hyperparameters
```
batch_size = 16
n_epochs = 2
base_LM_model = "xtremedistil-l6-h256-uncased"
max_seq_len = 512
learning_rate = 3e-5
optimizer = AdamW
weight_decay = 0.01
lr_schedule = Linear
warmup_steps = 0
```
## Performance
The model was evaluated on the [NaturalQuestionsShort](https://research.google/pubs/pub47761/) dev set from the [MRQA Shared Task 2019](https://github.com/mrqa/MRQA-Shared-Task-2019) repository.
```
"exact_match": 46.914926768463694,
"f1": 63.863619507647456,
```
## UKP Square
This model can also be found on [UKP Square](https://square.ukp-lab.de/qa). This website from the [UKP lab at the TU Darmstadt](https://www.informatik.tu-darmstadt.de/ukp/ukp_home/index.en.jsp) is a platform to compare and evaluate cloud-hosted QA models via explainability techniques and behavioral tests.
## Author & Background
This model was created by Janik and Ben during the [DL4NLP course](https://github.com/dl4nlp-tuda/deep-learning-for-nlp-lectures) taught by [Ivan Habernal](https://www.trusthlt.org/).
|
anonchickenlegs/sartoshi-bot | 19b5727d5b7f4399fe4997a9797c3b7125504350 | 2022-07-23T02:20:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | anonchickenlegs | null | anonchickenlegs/sartoshi-bot | 28 | null | transformers | 7,403 | ---
tags:
- conversational
---
|
sudo-s/modeversion2_m7_e8 | ef4c745f10e424c2ad13ce3280cc0d1d2cac0469 | 2022-07-24T19:34:08.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | sudo-s | null | sudo-s/modeversion2_m7_e8 | 28 | null | transformers | 7,404 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: modeversion2_m7_e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modeversion2_m7_e8
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem7 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1060
- Accuracy: 0.9761
## Model description
More information needed
## Intended uses & limitations
More information needed
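As a minimal usage sketch, the checkpoint can be loaded with the 🤗 Transformers image-classification pipeline. The image URL below is illustrative; the class labels come from the sudo-s/herbier_mesuem7 dataset the model was fine-tuned on:
```python
from transformers import pipeline

# Image-classification sketch; the sample image URL is illustrative.
classifier = pipeline("image-classification", model="sudo-s/modeversion2_m7_e8")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```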
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.0231 | 0.06 | 100 | 3.8568 | 0.1883 |
| 3.3863 | 0.12 | 200 | 3.2510 | 0.2596 |
| 2.6187 | 0.18 | 300 | 2.6243 | 0.3882 |
| 2.3097 | 0.23 | 400 | 2.2189 | 0.4527 |
| 1.9016 | 0.29 | 500 | 1.9495 | 0.5244 |
| 1.7478 | 0.35 | 600 | 1.6609 | 0.6091 |
| 1.2345 | 0.41 | 700 | 1.4335 | 0.6426 |
| 1.4129 | 0.47 | 800 | 1.3001 | 0.6752 |
| 1.1722 | 0.53 | 900 | 1.2030 | 0.6785 |
| 1.0808 | 0.59 | 1000 | 1.0051 | 0.7273 |
| 0.8814 | 0.64 | 1100 | 1.0715 | 0.7063 |
| 0.9831 | 0.7 | 1200 | 0.9283 | 0.7334 |
| 0.8118 | 0.76 | 1300 | 0.8525 | 0.7631 |
| 0.7203 | 0.82 | 1400 | 0.7849 | 0.7756 |
| 0.8881 | 0.88 | 1500 | 0.8786 | 0.7487 |
| 0.6407 | 0.94 | 1600 | 0.6896 | 0.8000 |
| 0.7574 | 1.0 | 1700 | 0.7314 | 0.7754 |
| 0.6063 | 1.06 | 1800 | 0.6312 | 0.8068 |
| 0.4797 | 1.11 | 1900 | 0.5792 | 0.8296 |
| 0.4973 | 1.17 | 2000 | 0.5846 | 0.8221 |
| 0.4432 | 1.23 | 2100 | 0.7057 | 0.7905 |
| 0.5518 | 1.29 | 2200 | 0.5621 | 0.8304 |
| 0.3256 | 1.35 | 2300 | 0.5890 | 0.8143 |
| 0.4284 | 1.41 | 2400 | 0.5204 | 0.8485 |
| 0.3702 | 1.47 | 2500 | 0.5699 | 0.8256 |
| 0.2858 | 1.52 | 2600 | 0.5815 | 0.8287 |
| 0.3706 | 1.58 | 2700 | 0.4615 | 0.8571 |
| 0.3484 | 1.64 | 2800 | 0.4812 | 0.8518 |
| 0.2865 | 1.7 | 2900 | 0.4285 | 0.8638 |
| 0.4474 | 1.76 | 3000 | 0.5217 | 0.8377 |
| 0.2101 | 1.82 | 3100 | 0.4478 | 0.8589 |
| 0.3545 | 1.88 | 3200 | 0.4444 | 0.8612 |
| 0.2728 | 1.93 | 3300 | 0.4213 | 0.8645 |
| 0.3525 | 1.99 | 3400 | 0.3551 | 0.8848 |
| 0.0936 | 2.05 | 3500 | 0.4074 | 0.8748 |
| 0.2118 | 2.11 | 3600 | 0.4089 | 0.8812 |
| 0.2744 | 2.17 | 3700 | 0.3534 | 0.8894 |
| 0.211 | 2.23 | 3800 | 0.4422 | 0.8599 |
| 0.1684 | 2.29 | 3900 | 0.3705 | 0.8858 |
| 0.1885 | 2.34 | 4000 | 0.3651 | 0.8862 |
| 0.249 | 2.4 | 4100 | 0.4234 | 0.8687 |
| 0.1485 | 2.46 | 4200 | 0.3784 | 0.8798 |
| 0.1188 | 2.52 | 4300 | 0.3589 | 0.8873 |
| 0.1274 | 2.58 | 4400 | 0.3570 | 0.8917 |
| 0.2206 | 2.64 | 4500 | 0.3377 | 0.8920 |
| 0.1287 | 2.7 | 4600 | 0.3170 | 0.9023 |
| 0.1805 | 2.75 | 4700 | 0.3469 | 0.8934 |
| 0.1505 | 2.81 | 4800 | 0.4258 | 0.8757 |
| 0.1592 | 2.87 | 4900 | 0.3415 | 0.8948 |
| 0.1297 | 2.93 | 5000 | 0.3168 | 0.9028 |
| 0.1284 | 2.99 | 5100 | 0.3060 | 0.9089 |
| 0.0833 | 3.05 | 5200 | 0.2610 | 0.9207 |
| 0.0334 | 3.11 | 5300 | 0.2766 | 0.9197 |
| 0.0847 | 3.17 | 5400 | 0.3366 | 0.9016 |
| 0.1112 | 3.22 | 5500 | 0.3098 | 0.9079 |
| 0.0477 | 3.28 | 5600 | 0.3385 | 0.9041 |
| 0.0419 | 3.34 | 5700 | 0.2944 | 0.9139 |
| 0.0827 | 3.4 | 5800 | 0.2715 | 0.9239 |
| 0.0659 | 3.46 | 5900 | 0.2695 | 0.9230 |
| 0.0244 | 3.52 | 6000 | 0.3050 | 0.9147 |
| 0.0883 | 3.58 | 6100 | 0.2862 | 0.9203 |
| 0.0527 | 3.63 | 6200 | 0.2383 | 0.9319 |
| 0.0828 | 3.69 | 6300 | 0.2984 | 0.9182 |
| 0.0678 | 3.75 | 6400 | 0.2135 | 0.9436 |
| 0.0492 | 3.81 | 6500 | 0.2605 | 0.9296 |
| 0.0374 | 3.87 | 6600 | 0.2192 | 0.9380 |
| 0.1846 | 3.93 | 6700 | 0.2804 | 0.9187 |
| 0.0557 | 3.99 | 6800 | 0.2599 | 0.9253 |
| 0.0127 | 4.04 | 6900 | 0.2412 | 0.9336 |
| 0.0203 | 4.1 | 7000 | 0.2214 | 0.9415 |
| 0.0272 | 4.16 | 7100 | 0.2322 | 0.9356 |
| 0.066 | 4.22 | 7200 | 0.2643 | 0.9325 |
| 0.0628 | 4.28 | 7300 | 0.2170 | 0.9406 |
| 0.0108 | 4.34 | 7400 | 0.2388 | 0.9405 |
| 0.026 | 4.4 | 7500 | 0.2533 | 0.9372 |
| 0.0401 | 4.45 | 7600 | 0.2407 | 0.9358 |
| 0.0493 | 4.51 | 7700 | 0.2213 | 0.9415 |
| 0.0951 | 4.57 | 7800 | 0.3016 | 0.9237 |
| 0.0017 | 4.63 | 7900 | 0.2183 | 0.9448 |
| 0.0561 | 4.69 | 8000 | 0.1962 | 0.9492 |
| 0.0063 | 4.75 | 8100 | 0.1868 | 0.9522 |
| 0.0054 | 4.81 | 8200 | 0.2068 | 0.9459 |
| 0.0519 | 4.87 | 8300 | 0.2141 | 0.9429 |
| 0.027 | 4.92 | 8400 | 0.2138 | 0.9438 |
| 0.0034 | 4.98 | 8500 | 0.1774 | 0.9529 |
| 0.0096 | 5.04 | 8600 | 0.1778 | 0.9512 |
| 0.0011 | 5.1 | 8700 | 0.1854 | 0.9512 |
| 0.0195 | 5.16 | 8800 | 0.1914 | 0.9483 |
| 0.0245 | 5.22 | 8900 | 0.2156 | 0.9471 |
| 0.0055 | 5.28 | 9000 | 0.1640 | 0.9574 |
| 0.0166 | 5.33 | 9100 | 0.1770 | 0.9568 |
| 0.0217 | 5.39 | 9200 | 0.2011 | 0.9479 |
| 0.0017 | 5.45 | 9300 | 0.2210 | 0.9462 |
| 0.0161 | 5.51 | 9400 | 0.1510 | 0.9621 |
| 0.0193 | 5.57 | 9500 | 0.1643 | 0.9586 |
| 0.0121 | 5.63 | 9600 | 0.1716 | 0.9535 |
| 0.0146 | 5.69 | 9700 | 0.1720 | 0.9554 |
| 0.0071 | 5.74 | 9800 | 0.1831 | 0.9541 |
| 0.0018 | 5.8 | 9900 | 0.2076 | 0.9485 |
| 0.0007 | 5.86 | 10000 | 0.1636 | 0.9599 |
| 0.0005 | 5.92 | 10100 | 0.1625 | 0.9602 |
| 0.0277 | 5.98 | 10200 | 0.1874 | 0.9546 |
| 0.0005 | 6.04 | 10300 | 0.1790 | 0.9579 |
| 0.0012 | 6.1 | 10400 | 0.1840 | 0.9544 |
| 0.0431 | 6.15 | 10500 | 0.1571 | 0.9628 |
| 0.0332 | 6.21 | 10600 | 0.1599 | 0.9591 |
| 0.0014 | 6.27 | 10700 | 0.1493 | 0.9632 |
| 0.0014 | 6.33 | 10800 | 0.1366 | 0.9661 |
| 0.0006 | 6.39 | 10900 | 0.1582 | 0.9609 |
| 0.0005 | 6.45 | 11000 | 0.1704 | 0.9589 |
| 0.0004 | 6.51 | 11100 | 0.1376 | 0.9671 |
| 0.0755 | 6.57 | 11200 | 0.1375 | 0.9654 |
| 0.0002 | 6.62 | 11300 | 0.1361 | 0.9661 |
| 0.0006 | 6.68 | 11400 | 0.1323 | 0.9675 |
| 0.0009 | 6.74 | 11500 | 0.1239 | 0.9692 |
| 0.0004 | 6.8 | 11600 | 0.1514 | 0.9631 |
| 0.0002 | 6.86 | 11700 | 0.1386 | 0.9664 |
| 0.0004 | 6.92 | 11800 | 0.1368 | 0.9659 |
| 0.0004 | 6.98 | 11900 | 0.1276 | 0.9684 |
| 0.0002 | 7.03 | 12000 | 0.1171 | 0.9712 |
| 0.0002 | 7.09 | 12100 | 0.1142 | 0.9711 |
| 0.0001 | 7.15 | 12200 | 0.1183 | 0.9727 |
| 0.0002 | 7.21 | 12300 | 0.1167 | 0.9732 |
| 0.0002 | 7.27 | 12400 | 0.1143 | 0.9737 |
| 0.0001 | 7.33 | 12500 | 0.1129 | 0.9737 |
| 0.0002 | 7.39 | 12600 | 0.1116 | 0.9742 |
| 0.0002 | 7.44 | 12700 | 0.1126 | 0.9745 |
| 0.0002 | 7.5 | 12800 | 0.1111 | 0.9748 |
| 0.0002 | 7.56 | 12900 | 0.1102 | 0.9747 |
| 0.0001 | 7.62 | 13000 | 0.1094 | 0.9747 |
| 0.0001 | 7.68 | 13100 | 0.1086 | 0.9742 |
| 0.0001 | 7.74 | 13200 | 0.1079 | 0.9748 |
| 0.0002 | 7.8 | 13300 | 0.1062 | 0.9754 |
| 0.0002 | 7.85 | 13400 | 0.1068 | 0.9757 |
| 0.0001 | 7.91 | 13500 | 0.1061 | 0.9762 |
| 0.0001 | 7.97 | 13600 | 0.1060 | 0.9761 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
thu-coai/EVA2.0-base | 5e560e37d230fee015571a8cbacc0bdbf70463e5 | 2022-07-25T03:50:58.000Z | [
"pytorch",
"zh",
"arxiv:2108.01547",
"arxiv:2203.09313",
"transformers",
"license:mit"
] | null | false | thu-coai | null | thu-coai/EVA2.0-base | 28 | null | transformers | 7,405 | ---
language: zh
tags:
- pytorch
license: mit
---
# EVA
## Model Description
EVA is the largest open-source Chinese dialogue model with up to 2.8B parameters. The 1.0 version model is pre-trained on [WudaoCorpus-Dialog](https://resource.wudaoai.cn/home), and the 2.0 version is pre-trained on a carefully cleaned version of WudaoCorpus-Dialog which yields better performance than the 1.0 version. [Paper link](https://arxiv.org/abs/2108.01547) of EVA1.0. [Paper link](https://arxiv.org/abs/2203.09313) of EVA2.0.
## Model Configuration
| Model | n_params | n_enc-layers | n_dec-layers | d_model | d_ff | n_heads | d_head | attn-scale |
| ------------- | -------- | ------------ | ------------ | ------- | ----- | ------- | ------ | ---------- |
| EVA1.0 | 2.8B | 24 | 24 | 2,048 | 5,120 | 32 | 64 | No |
| EVA2.0_Base | 300M | 12 | 12 | 768 | 3,072 | 12 | 64 | Yes |
| EVA2.0_Large | 970M | 24 | 24 | 1,024 | 4,096 | 16 | 64 | Yes |
| EVA2.0_xLarge | 2.8B | 24 | 24 | 2,048 | 5,120 | 32 | 64 | Yes |
## How to use
Please refer to our GitHub [repository](https://github.com/thu-coai/EVA/tree/huggingface).
## Performance
We use the KdConv dataset to finetune and evaluate the model. Under the default hyperparameters in the scripts, we get the following results on the test set:
| | loss | BLEU-4 | Dist-4 | f1 |
| ---- | ---- | ---- | ---- | ---- |
| eva1.0 Before finetune | 3.49 | 0.64 | 85.94 | 13.24 |
| eva1.0 After finetune | 2.37 | 3.94 | 72.47 | 19.80 |
| eva2.0 Before finetune | 3.14 | 1.02 | 82.67 | 14.36 |
| eva2.0 After finetune | 2.09 | 4.69 | 73.03 | 22.91 |
We also use beam search to run the interactive inference of EVA2.0:
```
Usr >>> 你好
Sys >>> 你好,请问你是?
Usr >>> 我是小明
Sys >>> 好的,小明,我是新来的,请多关照
Usr >>> 你了解人工智能吗?
Sys >>> 不了解,我只知道我是机器人
Usr >>> 那你很聪明
Sys >>> 谢谢夸奖,我会继续努力的
Usr >>> 你是学什么专业的呀
Sys >>> 计算机科学与技术,你呢?
Usr >>> 我也是学计算机的
Sys >>> 哦,那你有什么想知道的,可以问我
Usr >>> 编程语言你学过哪些?
Sys >>> java,c,c+都学过
Usr >>> 你学的真多
Sys >>> 呵呵,还好吧,我比较喜欢写代码
Usr >>> 哈哈,我也喜欢写代码,以后可以多多交流
Sys >>> 嗯,好的
Usr >>> 拜拜喽
Sys >>> 拜~
```
**NOTE**: Since different hardware may have different implementations of random functions, even if you use the same random seed as ours, you may not be able to reproduce this case. But the overall performance will not differ much.
## Disclaimer
The pre-trained models aim to facilitate the research for conversation generation. The model provided in this repository is trained on a large dataset collected from various sources. Although a rigorous cleaning and filtering process has been carried out to the data and the model output, there is no guarantee that all the inappropriate contents have been completely banned. All the contents generated by the model do not represent the authors' opinions. The decoding script provided in this repository is only for research purposes. We are not responsible for any content generated using our model.
## Citation
```
@article{coai2021eva,
title={EVA: An Open-Domain Chinese Dialogue System with Large-Scale Generative Pre-Training},
author={Zhou, Hao and Ke, Pei and Zhang, Zheng and Gu, Yuxian and Zheng, Yinhe and Zheng, Chujie and Wang, Yida and Wu, Chen Henry and Sun, Hao and Yang, Xiaocong and Wen, Bosi and Zhu, Xiaoyan and Huang, Minlie and Tang, Jie},
journal={arXiv preprint arXiv:2108.01547},
year={2021}
}
@article{coai2022eva2,
title={{EVA2.0}: Investigating Open-Domain Chinese Dialogue Systems with Large-Scale Pre-Training},
author={Gu, Yuxian and Wen, Jiaxin and Sun, Hao and Song, Yi and Ke, Pei and Zheng, Chujie and Zhang, Zheng and Yao, Jianzhu and Zhu, Xiaoyan and Tang, Jie and Huang, Minlie},
journal={arXiv preprint arXiv:2203.09313},
year={2022}
}
``` |
Yuetian/T5-finetuned-storyCommonsense | bb62c9d47bdd2d8feaf6370fa5f2c9d18bea5bc9 | 2022-07-28T02:17:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | Yuetian | null | Yuetian/T5-finetuned-storyCommonsense | 28 | null | transformers | 7,406 | ---
license: mit
---
|
wiselinjayajos/t5-end2end-questions-generation-cvqualtrics-squad-V1 | e58afb83431cbea25eeb092b011a040ef7fd6ced | 2022-07-28T06:56:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | wiselinjayajos | null | wiselinjayajos/t5-end2end-questions-generation-cvqualtrics-squad-V1 | 28 | null | transformers | 7,407 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-end2end-questions-generation-cvqualtrics-squad-V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation-cvqualtrics-squad-V1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2337
## Model description
More information needed
## Intended uses & limitations
More information needed
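As a minimal usage sketch, the checkpoint can be loaded as a standard seq2seq model. The `generate questions:` prefix and the end-of-sequence marker below are assumptions based on common end-to-end question-generation fine-tunes and are not documented in this card:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "wiselinjayajos/t5-end2end-questions-generation-cvqualtrics-squad-V1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The task prefix is an assumption, not documented in this card.
text = "generate questions: The Eiffel Tower was completed in 1889 and is located in Paris. </s>"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```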
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6162 | 0.34 | 100 | 1.8890 |
| 1.9995 | 0.67 | 200 | 1.6871 |
| 1.8697 | 1.01 | 300 | 1.6146 |
| 1.7682 | 1.34 | 400 | 1.5530 |
| 1.7323 | 1.68 | 500 | 1.5232 |
| 1.7256 | 2.01 | 600 | 1.4921 |
| 1.6506 | 2.35 | 700 | 1.4640 |
| 1.6438 | 2.68 | 800 | 1.4406 |
| 1.6399 | 3.02 | 900 | 1.4137 |
| 1.5786 | 3.36 | 1000 | 1.3924 |
| 1.5805 | 3.69 | 1100 | 1.3788 |
| 1.5824 | 4.03 | 1200 | 1.3626 |
| 1.5295 | 4.36 | 1300 | 1.3454 |
| 1.5333 | 4.7 | 1400 | 1.3356 |
| 1.537 | 5.03 | 1500 | 1.3230 |
| 1.5002 | 5.37 | 1600 | 1.3157 |
| 1.4936 | 5.7 | 1700 | 1.3046 |
| 1.4937 | 6.04 | 1800 | 1.2958 |
| 1.4649 | 6.38 | 1900 | 1.2826 |
| 1.4742 | 6.71 | 2000 | 1.2744 |
| 1.4641 | 7.05 | 2100 | 1.2603 |
| 1.4472 | 7.38 | 2200 | 1.2595 |
| 1.4403 | 7.72 | 2300 | 1.2526 |
| 1.4508 | 8.05 | 2400 | 1.2475 |
| 1.4191 | 8.39 | 2500 | 1.2412 |
| 1.4367 | 8.72 | 2600 | 1.2354 |
| 1.4272 | 9.06 | 2700 | 1.2386 |
| 1.4104 | 9.4 | 2800 | 1.2323 |
| 1.4179 | 9.73 | 2900 | 1.2337 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SharpAI/mal_tls-bert-base-w8a8 | 89dc967a2e47be6711447d0682c3e530174ac3d8 | 2022-07-28T06:40:11.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"transformers",
"generated_from_keras_callback",
"model-index"
] | text-classification | false | SharpAI | null | SharpAI/mal_tls-bert-base-w8a8 | 28 | null | transformers | 7,408 | ---
tags:
- generated_from_keras_callback
model-index:
- name: mal_tls-bert-base-w8a8
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mal_tls-bert-base-w8a8
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.10.3
|
BigSalmon/MrLincoln3 | c5ab836cbfdb585fef096e44eb7250e7f6364435 | 2021-11-18T23:30:03.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln3 | 27 | null | transformers | 7,409 | Entry not found |
Elron/bleurt-large-128 | 17bb269ba6cede0f50f3831f444fdb7222147ceb | 2021-10-04T13:21:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Elron | null | Elron/bleurt-large-128 | 27 | 1 | transformers | 7,410 | ## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-large-128")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-large-128")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([ 0.0020, -0.6647])
```
|
GKLMIP/bert-khmer-small-uncased | fe6017da32090699c8c115f17f4258ca6d5e495b | 2021-07-31T04:46:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GKLMIP | null | GKLMIP/bert-khmer-small-uncased | 27 | null | transformers | 7,411 | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` |
GroNLP/gpt2-small-dutch-embeddings | 845a4c7cdae998c888f6ed5932a0a2a1732d0104 | 2021-05-21T09:54:45.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"nl",
"arxiv:2012.05628",
"transformers",
"adaption",
"recycled",
"gpt2-small"
] | text-generation | false | GroNLP | null | GroNLP/gpt2-small-dutch-embeddings | 27 | null | transformers | 7,412 | ---
language: nl
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# GPT-2 recycled for Dutch (small, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for a Dutch vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch-embeddings")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-dutch-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-dutch-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-dutch-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Helsinki-NLP/opus-mt-en-ht | d90d52dc58d651b41475d5837f670b411150be90 | 2021-09-09T21:36:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ht",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ht | 27 | null | transformers | 7,413 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ht
* source languages: en
* target languages: ht
* OPUS readme: [en-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ht | 38.3 | 0.545 |
| Tatoeba.en.ht | 45.2 | 0.592 |
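A minimal translation sketch with the 🤗 Transformers pipeline (the English input sentence is illustrative):
```python
from transformers import pipeline

# English-to-Haitian Creole translation sketch; the input sentence is illustrative.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ht")
print(translator("How are you today?")[0]["translation_text"])
```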
|
Helsinki-NLP/opus-mt-et-fr | 7bc1a38b3451bb731b5f4e0b3a2a04df5aca9618 | 2021-09-09T21:46:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"et",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-et-fr | 27 | null | transformers | 7,414 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-et-fr
* source languages: et
* target languages: fr
* OPUS readme: [et-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/et-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/et-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.et.fr | 26.2 | 0.484 |
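A minimal translation sketch using the Marian classes directly (the Estonian input sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-et-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Illustrative Estonian input ("Hello, how is it going?").
batch = tokenizer(["Tere, kuidas läheb?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```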
|
KoichiYasuoka/roberta-base-thai-spm-upos | c7d621d5ca774b438a464aaef15fba17f1a91a02 | 2022-04-12T10:29:52.000Z | [
"pytorch",
"roberta",
"token-classification",
"th",
"dataset:universal_dependencies",
"transformers",
"thai",
"pos",
"wikipedia",
"dependency-parsing",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-thai-spm-upos | 27 | null | transformers | 7,415 | ---
language:
- "th"
tags:
- "thai"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "หลายหัวดีกว่าหัวเดียว"
---
# roberta-base-thai-spm-upos
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-spm](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-upos")
s="หลายหัวดีกว่าหัวเดียว"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-thai-spm-upos")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
Maha/hi-const21-hibert_final | 0d143967c20d19c5a57787ebe898ed100ed55b9c | 2022-02-23T10:31:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Maha | null | Maha/hi-const21-hibert_final | 27 | null | transformers | 7,416 | Entry not found |
Nhut/wav2vec2-large-xlsr-vietnamese | e58b08cf2c973426134a0ccf0c626aa5d8bf4018 | 2021-07-05T16:30:29.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"vi",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Nhut | null | Nhut/wav2vec2-large-xlsr-vietnamese | 27 | null | transformers | 7,417 | ---
language: vi
datasets:
- common_voice
- FOSD: https://data.mendeley.com/datasets/k9sxg2twv4/4
- VIVOS: https://ailab.hcmus.edu.vn/vivos
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Vietnamese by Nhut
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: 49.59
---
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice), [FOSD](https://data.mendeley.com/datasets/k9sxg2twv4/4) and [VIVOS](https://ailab.hcmus.edu.vn/vivos) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
ENCODER = {
"ia ": "iê ",
"ìa ": "iề ",
"ía ": "iế ",
"ỉa ": "iể ",
"ĩa ": "iễ ",
"ịa ": "iệ ",
"ya ": "yê ",
"ỳa ": "yề ",
"ýa ": "yế ",
"ỷa ": "yể ",
"ỹa ": "yễ ",
"ỵa ": "yệ ",
"ua ": "uô ",
"ùa ": "uồ ",
"úa ": "uố ",
"ủa ": "uổ ",
"ũa ": "uỗ ",
"ụa ": "uộ ",
"ưa ": "ươ ",
"ừa ": "ườ ",
"ứa ": "ướ ",
"ửa ": "ưở ",
"ữa ": "ưỡ ",
"ựa ": "ượ ",
"ke": "ce",
"kè": "cè",
"ké": "cé",
"kẻ": "cẻ",
"kẽ": "cẽ",
"kẹ": "cẹ",
"kê": "cê",
"kề": "cề",
"kế": "cế",
"kể": "cể",
"kễ": "cễ",
"kệ": "cệ",
"ki": "ci",
"kì": "cì",
"kí": "cí",
"kỉ": "cỉ",
"kĩ": "cĩ",
"kị": "cị",
"ky": "cy",
"kỳ": "cỳ",
"ký": "cý",
"kỷ": "cỷ",
"kỹ": "cỹ",
"kỵ": "cỵ",
"ghe": "ge",
"ghè": "gè",
"ghé": "gé",
"ghẻ": "gẻ",
"ghẽ": "gẽ",
"ghẹ": "gẹ",
"ghê": "gê",
"ghề": "gề",
"ghế": "gế",
"ghể": "gể",
"ghễ": "gễ",
"ghệ": "gệ",
"ngh": "\x80",
"uyê": "\x96",
"uyề": "\x97",
"uyế": "\x98",
"uyể": "\x99",
"uyễ": "\x9a",
"uyệ": "\x9b",
"ng": "\x81",
"ch": "\x82",
"gh": "\x83",
"nh": "\x84",
"gi": "\x85",
"ph": "\x86",
"kh": "\x87",
"th": "\x88",
"tr": "\x89",
"uy": "\x8a",
"uỳ": "\x8b",
"uý": "\x8c",
"uỷ": "\x8d",
"uỹ": "\x8e",
"uỵ": "\x8f",
"iê": "\x90",
"iề": "\x91",
"iế": "\x92",
"iể": "\x93",
"iễ": "\x94",
"iệ": "\x95",
"uô": "\x9c",
"uồ": "\x9d",
"uố": "\x9e",
"uổ": "\x9f",
"uỗ": "\xa0",
"uộ": "\xa1",
"ươ": "\xa2",
"ườ": "\xa3",
"ướ": "\xa4",
"ưở": "\xa5",
"ưỡ": "\xa6",
"ượ": "\xa7",
}
def decode_string(x):
for k, v in list(reversed(list(ENCODER.items()))):
x = x.replace(v, k)
return x
test_dataset = load_dataset("common_voice", "vi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", [decode_string(x) for x in processor.batch_decode(predicted_ids)])
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
ENCODER = {
"ia ": "iê ",
"ìa ": "iề ",
"ía ": "iế ",
"ỉa ": "iể ",
"ĩa ": "iễ ",
"ịa ": "iệ ",
"ya ": "yê ",
"ỳa ": "yề ",
"ýa ": "yế ",
"ỷa ": "yể ",
"ỹa ": "yễ ",
"ỵa ": "yệ ",
"ua ": "uô ",
"ùa ": "uồ ",
"úa ": "uố ",
"ủa ": "uổ ",
"ũa ": "uỗ ",
"ụa ": "uộ ",
"ưa ": "ươ ",
"ừa ": "ườ ",
"ứa ": "ướ ",
"ửa ": "ưở ",
"ữa ": "ưỡ ",
"ựa ": "ượ ",
"ke": "ce",
"kè": "cè",
"ké": "cé",
"kẻ": "cẻ",
"kẽ": "cẽ",
"kẹ": "cẹ",
"kê": "cê",
"kề": "cề",
"kế": "cế",
"kể": "cể",
"kễ": "cễ",
"kệ": "cệ",
"ki": "ci",
"kì": "cì",
"kí": "cí",
"kỉ": "cỉ",
"kĩ": "cĩ",
"kị": "cị",
"ky": "cy",
"kỳ": "cỳ",
"ký": "cý",
"kỷ": "cỷ",
"kỹ": "cỹ",
"kỵ": "cỵ",
"ghe": "ge",
"ghè": "gè",
"ghé": "gé",
"ghẻ": "gẻ",
"ghẽ": "gẽ",
"ghẹ": "gẹ",
"ghê": "gê",
"ghề": "gề",
"ghế": "gế",
"ghể": "gể",
"ghễ": "gễ",
"ghệ": "gệ",
"ngh": "\x80",
"uyê": "\x96",
"uyề": "\x97",
"uyế": "\x98",
"uyể": "\x99",
"uyễ": "\x9a",
"uyệ": "\x9b",
"ng": "\x81",
"ch": "\x82",
"gh": "\x83",
"nh": "\x84",
"gi": "\x85",
"ph": "\x86",
"kh": "\x87",
"th": "\x88",
"tr": "\x89",
"uy": "\x8a",
"uỳ": "\x8b",
"uý": "\x8c",
"uỷ": "\x8d",
"uỹ": "\x8e",
"uỵ": "\x8f",
"iê": "\x90",
"iề": "\x91",
"iế": "\x92",
"iể": "\x93",
"iễ": "\x94",
"iệ": "\x95",
"uô": "\x9c",
"uồ": "\x9d",
"uố": "\x9e",
"uổ": "\x9f",
"uỗ": "\xa0",
"uộ": "\xa1",
"ươ": "\xa2",
"ườ": "\xa3",
"ướ": "\xa4",
"ưở": "\xa5",
"ưỡ": "\xa6",
"ượ": "\xa7",
}
def decode_string(x):
for k, v in list(reversed(list(ENCODER.items()))):
x = x.replace(v, k)
return x
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
model.to("cuda")
chars_to_ignore_regex = '[\\\+\@\ǀ\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluation: run the model on batches of speech arrays and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
# decode_string: We replace the encoded letter with the initial letters
batch["pred_strings"] = [decode_string(x) for x in batch["pred_strings"]]
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 49.59 %
## Training
The Common Voice `train` and `validation` splits, together with the FOSD and VIVOS datasets, were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/11pP4uVJj4SYZTzGjlCUtOHywlhYqs0cPx) |
SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune | 79c990003500c7e804b84ab057fed663b4f57711 | 2021-06-23T05:25:27.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune | 27 | null | transformers | 7,418 | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the python code snippets.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/python/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing Python code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
Sakonii/distilbert-base-nepali | 723fe4e63deb67d14412ee69ba0f9daddd8c752a | 2022-03-11T12:47:18.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"arxiv:1911.02116",
"arxiv:1910.01108",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Sakonii | null | Sakonii/distilbert-base-nepali | 27 | null | transformers | 7,419 | ---
license: apache-2.0
mask_token: "<mask>"
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-nepali
results: []
widget:
- text: "मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, <mask>, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।"
example_title: "Example 1"
- text: "अचेल विद्यालय र कलेजहरूले स्मारिका कत्तिको प्रकाशन गर्छन्, यकिन छैन । केही वर्षपहिलेसम्म गाउँसहरका सानाठूला <mask> संस्थाहरूमा पुग्दा शिक्षक वा कर्मचारीले संस्थाबाट प्रकाशित पत्रिका, स्मारिका र पुस्तक कोसेलीका रूपमा थमाउँथे ।"
example_title: "Example 2"
- text: "जलविद्युत् विकासको ११० वर्षको इतिहास बनाएको नेपालमा हाल सरकारी र निजी क्षेत्रबाट गरी करिब २ हजार मेगावाट <mask> उत्पादन भइरहेको छ ।"
example_title: "Example 3"
---
# distilbert-base-nepali
This model is pre-trained on the [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) dataset, which consists of over 13 million Nepali text sequences, using a masked language modeling (MLM) objective. Our approach trains a SentencePiece model (SPM) for text tokenization, similar to [XLM-RoBERTa](https://arxiv.org/abs/1911.02116), and trains a [DistilBERT model](https://arxiv.org/abs/1910.01108) for language modeling.
It achieves the following results on the evaluation set:
mlm probability|evaluation loss|evaluation perplexity
--:|----:|-----:|
15%|2.349|10.479|
20%|2.605|13.351|
## Model description
Refer to original [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
## Intended uses & limitations
This backbone model is intended to be fine-tuned on Nepali-language downstream tasks such as sequence classification, token classification or question answering.
Because the language model was trained on data with texts grouped into blocks of 512 tokens, it handles text sequences of up to 512 tokens and may not perform satisfactorily on shorter sequences.
## Usage
This model can be used directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Sakonii/distilbert-base-nepali')
>>> unmasker("मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, <mask>, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।")
[{'score': 0.04128897562623024,
'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, मौसम, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।',
'token': 2605,
'token_str': 'मौसम'},
{'score': 0.04100276157259941,
'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, प्रकृति, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।',
'token': 2792,
'token_str': 'प्रकृति'},
{'score': 0.026525357738137245,
'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, पानी, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।',
'token': 387,
'token_str': 'पानी'},
{'score': 0.02340106852352619,
'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, जल, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।',
'token': 1313,
'token_str': 'जल'},
{'score': 0.02055591531097889,
'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, वातावरण, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।',
'token': 790,
'token_str': 'वातावरण'}]
```
Here is how we can use the model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('Sakonii/distilbert-base-nepali')
model = AutoModelForMaskedLM.from_pretrained('Sakonii/distilbert-base-nepali')
# prepare input
text = "चाहिएको text यता राख्नु होला।"
encoded_input = tokenizer(text, return_tensors='pt')
# forward pass
output = model(**encoded_input)
```
## Training data
This model is trained on the [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) language modeling dataset, which combines the [OSCAR](https://huggingface.co/datasets/oscar) and [cc100](https://huggingface.co/datasets/cc100) datasets with a set of Nepali articles scraped from Wikipedia.
For language model training, the texts in the training set are grouped into blocks of 512 tokens.
## Tokenization
A SentencePiece model (SPM) is trained on a subset of the [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) dataset for text tokenization. The tokenizer is trained with vocab-size=24576, min-frequency=4, limit-alphabet=1000 and model-max-length=512.
## Training procedure
The model is trained with the same configuration as the original [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased); 512 tokens per instance, 28 instances per batch, and around 35.7K training steps.
### Training hyperparameters
The following hyperparameters were used for training of the final epoch: [ Refer to the *Training results* table below for varying hyperparameters every epoch ]
- learning_rate: 5e-05
- train_batch_size: 28
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
The model is trained for 4 epochs with varying hyperparameters:
| Training Loss | Epoch | MLM Probability | Train Batch Size | Step | Validation Loss | Perplexity |
|:-------------:|:-----:|:---------------:|:----------------:|:-----:|:---------------:|:----------:|
| 3.4477 | 1.0 | 15 | 26 | 38864 | 3.3067 | 27.2949 |
| 2.9451 | 2.0 | 15 | 28 | 35715 | 2.8238 | 16.8407 |
| 2.866 | 3.0 | 20 | 28 | 35715 | 2.7431 | 15.5351 |
| 2.7287 | 4.0 | 20 | 28 | 35715 | 2.6053 | 13.5353 |
| 2.6412 | 5.0 | 20 | 28 | 35715 | 2.5161 | 12.3802 |
Final model evaluated with MLM Probability of 15%:
| Training Loss | Epoch | MLM Probability | Train Batch Size | Step | Validation Loss | Perplexity |
|:-------------:|:-----:|:---------------:|:----------------:|:-----:|:---------------:|:----------:|
| - | - | 15 | - | - | 2.3494 | 10.4791 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Salesforce/qaconv-unifiedqa-t5-large | cfd08ce057a509a850fe14089ea828bc5e19c1d9 | 2021-06-23T10:18:29.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Salesforce | null | Salesforce/qaconv-unifiedqa-t5-large | 27 | null | transformers | 7,420 | Entry not found |
Tsubasaz/clinical-bert-base-128 | 10c960ca02dfaf6a4193506555adbe79f3ea7150 | 2022-02-21T11:31:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Tsubasaz | null | Tsubasaz/clinical-bert-base-128 | 27 | null | transformers | 7,421 | Entry not found |
antoiloui/netbert | 61624e3baf1b266be5b09c29948386f5c907cb6e | 2021-05-18T23:44:04.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | antoiloui | null | antoiloui/netbert | 27 | null | transformers | 7,422 | ---
language:
- en
license:
- mit
widget:
- text: "The nodes of a computer network may include [MASK]."
---
# NetBERT 📶
**A BERT-base model pre-trained on a huge corpus of computer networking text (~23 GB)**.
## Usage
You can use NetBERT with [🤗 transformers](https://github.com/huggingface/transformers):
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM
# Load pretrained model and tokenizer
model = BertForMaskedLM.from_pretrained("antoiloui/netbert")
tokenizer = BertTokenizer.from_pretrained("antoiloui/netbert")
```
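A short fill-mask sketch reusing the widget sentence from this card (top predictions will vary):
```python
from transformers import pipeline

# Fill-mask sketch using the widget sentence above.
unmasker = pipeline("fill-mask", model="antoiloui/netbert", tokenizer="antoiloui/netbert")
print(unmasker("The nodes of a computer network may include [MASK]."))
```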
## Documentation
Detailed documentation on the pre-trained model, its implementation, and the data can be found [here](https://github.com/antoiloui/netbert/blob/master/docs/index.md).
## Citation
For attribution in academic contexts, please cite this work as:
```
@mastersthesis{louis2020netbert,
title={NetBERT: A Pre-trained Language Representation Model for Computer Networking},
author={Louis, Antoine},
year={2020},
school={University of Liege}
}
``` |
boychaboy/SNLI_roberta-base | 7c713cc2acbb5c9650fe40582b16a2b100f54ab6 | 2021-05-20T14:36:00.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | boychaboy | null | boychaboy/SNLI_roberta-base | 27 | null | transformers | 7,423 | Entry not found |
cahya/bert-base-indonesian-tydiqa | 6f300216201f1b4942633329b0ba5e7511dfe61e | 2021-05-19T13:41:43.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | cahya | null | cahya/bert-base-indonesian-tydiqa | 27 | null | transformers | 7,424 | Entry not found |
cointegrated/rubert-base-lesha17-punctuation | eb42c9c9b3d20885594e19b11171af21aa54ec9d | 2021-11-15T07:36:53.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | cointegrated | null | cointegrated/rubert-base-lesha17-punctuation | 27 | 1 | transformers | 7,425 | The model for https://github.com/Lesha17/Punctuation; all credits go to the owner of this repository. |
facebook/convnext-large-224-22k-1k | 3f11dd4165e438cea1d06e923416fc7c29917d05 | 2022-02-26T12:21:11.000Z | [
"pytorch",
"tf",
"convnext",
"image-classification",
"dataset:imagenet-21k",
"arxiv:2201.03545",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/convnext-large-224-22k-1k | 27 | null | transformers | 7,426 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (large-sized model)
ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-large-224-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-224-22k-1k")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1k ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
facebook/wav2vec2-base-fr-voxpopuli | 93a9c011832d9559627bd4402fd7740ca966626d | 2021-07-06T01:54:24.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"fr",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-fr-voxpopuli | 27 | null | transformers | 7,427 | ---
language: fr
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the French (fr) unlabeled subset of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
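A minimal sketch of that checkpoint swap, assuming a CTC tokenizer and vocabulary have already been built for the target data as described in the blog post; the configuration values below are illustrative:
```python
from transformers import Wav2Vec2ForCTC

# Load the French VoxPopuli base checkpoint as the starting point for CTC fine-tuning.
# vocab_size is illustrative and must match the tokenizer built from your own data.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base-fr-voxpopuli",
    attention_dropout=0.1,
    hidden_dropout=0.1,
    mask_time_prob=0.05,
    ctc_loss_reduction="mean",
    vocab_size=36,
)
model.freeze_feature_extractor()
```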
|
flax-community/roberta-base-mr | 64d2c745f264f09c3d5b678a718746b2613887db | 2021-07-17T15:30:40.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"arxiv:1907.11692",
"transformers",
"autotrain_compatible"
] | fill-mask | false | flax-community | null | flax-community/roberta-base-mr | 27 | 1 | transformers | 7,428 | ---
widget:
- text: "अध्यक्ष <mask> पवार आणि उपमुख्यमंत्री अजित पवार यांची भेट घेतली."
- text: "मोठी बातमी! उद्या दुपारी <mask> वाजता जाहीर होणार दहावीचा निकाल"
---
# RoBERTa base model for Marathi language (मराठी भाषा)
Pretrained model on Marathi language using a masked language modeling (MLM) objective. RoBERTa was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). We trained RoBERTa model for Marathi Language during community week hosted by Huggingface 🤗 using JAX/Flax for NLP & CV jax.
<img src="https://user-images.githubusercontent.com/15062408/126040902-ea8808db-ec30-4a3f-bf95-5d3b10d674e9.png" alt="huggingface-marathi-roberta" width="350" height="350" style="text-align: center">
## Model description
Marathi RoBERTa is a transformers model pretrained on a large corpus of Marathi data in a self-supervised fashion.
## Intended uses & limitations❗️
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. We fine-tuned this model on the iNLTK and IndicNLP news text classification tasks. Since the Marathi mc4 dataset is built by scraping Marathi newspaper text, it contains some biases which will also affect all fine-tuned versions of this model.
### How to use❓
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='flax-community/roberta-base-mr')
>>> unmasker("मोठी बातमी! उद्या दुपारी <mask> वाजता जाहीर होणार दहावीचा निकाल")
[{'score': 0.057209037244319916,'sequence': 'मोठी बातमी! उद्या दुपारी आठ वाजता जाहीर होणार दहावीचा निकाल',
'token': 2226,
'token_str': 'आठ'},
{'score': 0.02796074189245701,
'sequence': 'मोठी बातमी! उद्या दुपारी २० वाजता जाहीर होणार दहावीचा निकाल',
'token': 987,
'token_str': '२०'},
{'score': 0.017235398292541504,
'sequence': 'मोठी बातमी! उद्या दुपारी नऊ वाजता जाहीर होणार दहावीचा निकाल',
'token': 4080,
'token_str': 'नऊ'},
{'score': 0.01691395975649357,
'sequence': 'मोठी बातमी! उद्या दुपारी २१ वाजता जाहीर होणार दहावीचा निकाल',
'token': 1944,
'token_str': '२१'},
{'score': 0.016252165660262108,
'sequence': 'मोठी बातमी! उद्या दुपारी ३ वाजता जाहीर होणार दहावीचा निकाल',
'token': 549,
'token_str': ' ३'}]
```
## Training data 🏋🏻♂️
The RoBERTa Marathi model was pretrained on `mr` dataset of C4 multilingual dataset:
<br>
<br>
[C4 (Colossal Clean Crawled Corpus)](https://yknzhu.wixsite.com/mbweb), Introduced by Raffel et al. in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://paperswithcode.com/paper/exploring-the-limits-of-transfer-learning).
The dataset can be downloaded in a pre-processed form from [allennlp](https://github.com/allenai/allennlp/discussions/5056) or huggingface's datsets - [mc4 dataset](https://huggingface.co/datasets/mc4).
The Marathi (`mr`) dataset consists of 14 billion tokens across 7.8 million docs, weighing in at ~70 GB of text.
## Data Cleaning 🧹
Though the initial `mc4` Marathi corpus is ~70 GB, data exploration showed that it contains documents in other languages, especially Thai, Chinese, etc. So we had to clean the dataset before training the tokenizer and model. Surprisingly, the results after cleaning the Marathi mc4 corpus data were:
#### **Train set:**
Clean docs count: 1581396 out of 7774331. <br>
**~20.34%** of the whole Marathi train split is actually Marathi.
#### **Validation set:**
Clean docs count: 1700 out of 7928. <br>
**~19.90%** of the whole Marathi validation split is actually Marathi.
## Training procedure 👨🏻💻
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
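As an illustration only (the actual pretraining used the Flax scripts from the community week, not this collator), the same masking scheme corresponds to the 🤗 `DataCollatorForLanguageModeling` defaults:
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("flax-community/roberta-base-mr")

# 15% of tokens are selected on every call (dynamic masking), with the 80/10/10
# mask/random/keep split described above.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoding = tokenizer("मोठी बातमी! उद्या दुपारी आठ वाजता जाहीर होणार दहावीचा निकाल")
batch = collator([encoding])
print(batch["input_ids"])  # some positions replaced by <mask> or random tokens
print(batch["labels"])     # -100 everywhere except the masked positions
```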
### Pretraining
The model was trained on a Google Cloud Engine TPU v3-8 machine (335 GB of RAM, 1000 GB of hard drive, 96 CPU cores, **8 v3 TPU cores**) for 42K steps with a batch size of 128 and a sequence length of 128. The
optimizer used is Adam with a learning rate of 3e-4, β1 = 0.9, β2 = 0.98 and
ε = 1e-8, a weight decay of 0.01, learning rate warmup for 1,000 steps and linear decay of the learning
rate after.
We tracked experiments and hyperparameter tuning on the Weights & Biases platform. Here is the link to the main dashboard: <br>
[Link to Weights and Biases Dashboard for Marathi RoBERTa model](https://wandb.ai/nipunsadvilkar/roberta-base-mr/runs/19qtskbg?workspace=user-nipunsadvilkar)
#### **Pretraining Results 📊**
The RoBERTa model reached an **eval accuracy of 85.28%** around ~35K steps, **with train loss at 0.6507 and eval loss at 0.6219**.
## Fine Tuning on downstream tasks
We performed fine-tuning on downstream tasks. We used the following datasets for classification:
1. [IndicNLP Marathi news classification](https://github.com/ai4bharat-indicnlp/indicnlp_corpus#publicly-available-classification-datasets)
2. [iNLTK Marathi news headline classification](https://www.kaggle.com/disisbig/marathi-news-dataset)
### **Fine tuning on downstream task results (Segregated)**
#### 1. [IndicNLP Marathi news classification](https://github.com/ai4bharat-indicnlp/indicnlp_corpus#publicly-available-classification-datasets)
IndicNLP Marathi news dataset consists 3 classes - `['lifestyle', 'entertainment', 'sports']` - with following docs distribution as per classes:
| train | eval | test
| -- | -- | --
| 9672 | 477 | 478
💯 Our Marathi RoBERTa **`roberta-base-mr` model outperformed both classifiers** mentioned in [Arora, G. (2020). iNLTK](https://www.semanticscholar.org/paper/iNLTK%3A-Natural-Language-Toolkit-for-Indic-Languages-Arora/5039ed9e100d3a1cbbc25a02c82f6ee181609e83/figure/3) and [Kunchukuttan, Anoop et al. AI4Bharat-IndicNLP.](https://www.semanticscholar.org/paper/AI4Bharat-IndicNLP-Corpus%3A-Monolingual-Corpora-and-Kunchukuttan-Kakwani/7997d432925aff0ba05497d2893c09918298ca55/figure/4)
Dataset | FT-W | FT-WC | INLP | iNLTK | **roberta-base-mr 🏆**
-- | -- | -- | -- | -- | --
iNLTK Headlines | 83.06 | 81.65 | 89.92 | 92.4 | **97.48**
**🤗 Huggingface Model hub repo:**<br>
`roberta-base-mr` fine tuned on iNLTK Headlines classification dataset model:
[**`flax-community/mr-indicnlp-classifier`**](https://huggingface.co/flax-community/mr-indicnlp-classifier)
🧪 Fine tuning experiment's weight and biases dashboard [link](https://wandb.ai/nipunsadvilkar/huggingface/runs/1242bike?workspace=user-nipunsadvilkar
)
#### 2. [iNLTK Marathi news headline classification](https://www.kaggle.com/disisbig/marathi-news-dataset)
This dataset consists 3 classes - `['state', 'entertainment', 'sports']` - with following docs distribution as per classes:
| train | eval | test
| -- | -- | --
| 9658 | 1210 | 1210
💯 Here as well, **`roberta-base-mr` outperformed the `iNLTK` Marathi news text classifier**.
Dataset | iNLTK ULMFiT | **roberta-base-mr 🏆**
-- | -- | --
iNLTK news dataset (kaggle) | 92.4 | **94.21**
**🤗 Huggingface Model hub repo:**<br>
`roberta-base-mr` fine tuned on iNLTK news classification dataset model:
[**`flax-community/mr-inltk-classifier`**](https://huggingface.co/flax-community/mr-inltk-classifier)
Fine tuning experiment's weight and biases dashboard [link](https://wandb.ai/nipunsadvilkar/huggingface/runs/2u5l9hon?workspace=user-nipunsadvilkar
)
## **Want to check how the above models generalise on real-world Marathi data?**
Head to 🤗 Huggingface's spaces 🪐 to play with all three models:
1. Mask Language Modelling with Pretrained Marathi RoBERTa model: <br>
[**`flax-community/roberta-base-mr`**](https://huggingface.co/flax-community/roberta-base-mr)
2. Marathi Headline classifier: <br>
[**`flax-community/mr-indicnlp-classifier`**](https://huggingface.co/flax-community/mr-indicnlp-classifier)
3. Marathi news classifier: <br>
[**`flax-community/mr-inltk-classifier`**](https://huggingface.co/flax-community/mr-inltk-classifier)

[Streamlit app of Pretrained Roberta Marathi model on Huggingface Spaces](https://huggingface.co/spaces/flax-community/roberta-base-mr)

## Team Members
- Nipun Sadvilkar [@nipunsadvilkar](https://github.com/nipunsadvilkar)
- Haswanth Aekula [@hassiahk](https://github.com/hassiahk)
## Credits
Huge thanks to Huggingface 🤗 & Google Jax/Flax team for such a wonderful community week. Especially for providing such massive computing resource. Big thanks to [@patil-suraj](https://github.com/patil-suraj) & [@patrickvonplaten](https://github.com/patrickvonplaten) for mentoring during whole week.
<img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
|
genggui001/chinese_roberta_wwm_large_ext_fix_mlm | 9fbeb205b3d1a5c522b6d9e2243f7eb485689dee | 2021-11-05T08:28:59.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | genggui001 | null | genggui001/chinese_roberta_wwm_large_ext_fix_mlm | 27 | 1 | transformers | 7,429 | ---
language:
- zh
tags:
- bert
license: "apache-2.0"
---
# Please use 'Bert' related functions to load this model!
## Chinese BERT with Whole Word Masking Fix MLM Parameters
Parameters initialized from https://huggingface.co/hfl/chinese-roberta-wwm-ext-large
Missing MLM parameters issue: https://github.com/ymcui/Chinese-BERT-wwm/issues/98
Only the MLM parameters were trained; all other parameters were frozen.
More info in github https://github.com/genggui001/chinese_roberta_wwm_large_ext_fix_mlm
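A minimal usage sketch with the BERT classes, as requested above (the example sentence is only for illustration):
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline

model_name = "genggui001/chinese_roberta_wwm_large_ext_fix_mlm"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForMaskedLM.from_pretrained(model_name)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("北京是中国的[MASK]都。"))
```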
|
gurkan08/turkish-product-comment-sentiment-classification | 5ad35337c1346b6389f59084a615c04333ac2bff | 2021-05-19T17:53:17.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | gurkan08 | null | gurkan08/turkish-product-comment-sentiment-classification | 27 | null | transformers | 7,430 | Entry not found |
howey/electra-large-sst2 | 1503cf43cc086149796684ba6e266b0c4e4907d2 | 2021-06-04T06:39:18.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | howey | null | howey/electra-large-sst2 | 27 | null | transformers | 7,431 | Entry not found |
howey/roberta-large-cola | 6ab505e7ac0d09b6034435a0147ab5a6c0d4a7e4 | 2021-06-03T11:38:38.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | howey | null | howey/roberta-large-cola | 27 | null | transformers | 7,432 | Entry not found |
huggingtweets/footy_headlines | eb647fbe208daba06c955aacff45932a5a42fb3b | 2021-05-22T04:25:53.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/footy_headlines | 27 | null | transformers | 7,433 | ---
language: en
thumbnail: https://www.huggingtweets.com/footy_headlines/1606774412916/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/913057066243231744/3pa5pBzl_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Footy Headlines 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@footy_headlines bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@footy_headlines's tweets](https://twitter.com/footy_headlines).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3215</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>20</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>505</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2690</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/35awxvyw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @footy_headlines's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1tc1ld77) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1tc1ld77/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/footy_headlines'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
huggingtweets/visualizevalue | 94506966acee36155d2386888bfd4ba3e47625f2 | 2021-05-23T04:00:21.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/visualizevalue | 27 | null | transformers | 7,434 | ---
language: en
thumbnail: https://www.huggingtweets.com/visualizevalue/1601837796274/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1287562748562309122/4RLk5A_U_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Visualize Value 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@visualizevalue bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@visualizevalue's tweets](https://twitter.com/visualizevalue).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>1000</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>132</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>331</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>537</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/f2olvyds/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @visualizevalue's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/1rm01ie6) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/1rm01ie6/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/visualizevalue'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
ivanlau/wav2vec2-large-xls-r-300m-cantonese | 7410716ea687c66aeb39b9329f475c90686495ed | 2022-03-23T18:26:09.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"zh-HK",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ivanlau | null | ivanlau/wav2vec2-large-xls-r-300m-cantonese | 27 | 1 | transformers | 7,435 | ---
language:
- zh-HK
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- zh-HK
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Chinese_HongKong (Cantonese)
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: zh-HK
metrics:
- name: Test WER
type: wer
value: 0.8111349803079126
- name: Test CER
type: cer
value: 0.21962250882996914
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: zh-HK
metrics:
- name: Test WER
type: wer
value: 1.0
- name: Test CER
type: cer
value: 0.6160564326503191
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: zh-HK
metrics:
- name: Test WER with LM
type: wer
value: 0.8055853920515574
- name: Test CER with LM
type: cer
value: 0.21578686612008757
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: zh-HK
metrics:
- name: Test WER with LM
type: wer
value: 1.0012453300124533
- name: Test CER with LM
type: cer
value: 0.6153006382264025
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 61.55
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Chinese_HongKong (Cantonese)
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ZH-HK dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4848
- Wer: 0.8004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 1.0 | 183 | 47.8442 | 1.0 |
| No log | 2.0 | 366 | 6.3109 | 1.0 |
| 41.8902 | 3.0 | 549 | 6.2392 | 1.0 |
| 41.8902 | 4.0 | 732 | 5.9739 | 1.1123 |
| 41.8902 | 5.0 | 915 | 4.9014 | 1.9474 |
| 5.5817 | 6.0 | 1098 | 3.9892 | 1.0188 |
| 5.5817 | 7.0 | 1281 | 3.5080 | 1.0104 |
| 5.5817 | 8.0 | 1464 | 3.0797 | 0.9905 |
| 3.5579 | 9.0 | 1647 | 2.8111 | 0.9836 |
| 3.5579 | 10.0 | 1830 | 2.6726 | 0.9815 |
| 2.7771 | 11.0 | 2013 | 2.7177 | 0.9809 |
| 2.7771 | 12.0 | 2196 | 2.3582 | 0.9692 |
| 2.7771 | 13.0 | 2379 | 2.1708 | 0.9757 |
| 2.3488 | 14.0 | 2562 | 2.0491 | 0.9526 |
| 2.3488 | 15.0 | 2745 | 1.8518 | 0.9378 |
| 2.3488 | 16.0 | 2928 | 1.6845 | 0.9286 |
| 1.7859 | 17.0 | 3111 | 1.6412 | 0.9280 |
| 1.7859 | 18.0 | 3294 | 1.5488 | 0.9035 |
| 1.7859 | 19.0 | 3477 | 1.4546 | 0.9010 |
| 1.3898 | 20.0 | 3660 | 1.5147 | 0.9201 |
| 1.3898 | 21.0 | 3843 | 1.4467 | 0.8959 |
| 1.1291 | 22.0 | 4026 | 1.4743 | 0.9035 |
| 1.1291 | 23.0 | 4209 | 1.3827 | 0.8762 |
| 1.1291 | 24.0 | 4392 | 1.3437 | 0.8792 |
| 0.8993 | 25.0 | 4575 | 1.2895 | 0.8577 |
| 0.8993 | 26.0 | 4758 | 1.2928 | 0.8558 |
| 0.8993 | 27.0 | 4941 | 1.2947 | 0.9163 |
| 0.6298 | 28.0 | 5124 | 1.3151 | 0.8738 |
| 0.6298 | 29.0 | 5307 | 1.2972 | 0.8514 |
| 0.6298 | 30.0 | 5490 | 1.3030 | 0.8432 |
| 0.4757 | 31.0 | 5673 | 1.3264 | 0.8364 |
| 0.4757 | 32.0 | 5856 | 1.3131 | 0.8421 |
| 0.3735 | 33.0 | 6039 | 1.3457 | 0.8588 |
| 0.3735 | 34.0 | 6222 | 1.3450 | 0.8473 |
| 0.3735 | 35.0 | 6405 | 1.3452 | 0.9218 |
| 0.3253 | 36.0 | 6588 | 1.3754 | 0.8397 |
| 0.3253 | 37.0 | 6771 | 1.3554 | 0.8353 |
| 0.3253 | 38.0 | 6954 | 1.3532 | 0.8312 |
| 0.2816 | 39.0 | 7137 | 1.3694 | 0.8345 |
| 0.2816 | 40.0 | 7320 | 1.3953 | 0.8296 |
| 0.2397 | 41.0 | 7503 | 1.3858 | 0.8293 |
| 0.2397 | 42.0 | 7686 | 1.3959 | 0.8402 |
| 0.2397 | 43.0 | 7869 | 1.4350 | 0.9318 |
| 0.2084 | 44.0 | 8052 | 1.4004 | 0.8806 |
| 0.2084 | 45.0 | 8235 | 1.3871 | 0.8255 |
| 0.2084 | 46.0 | 8418 | 1.4060 | 0.8252 |
| 0.1853 | 47.0 | 8601 | 1.3992 | 0.8501 |
| 0.1853 | 48.0 | 8784 | 1.4186 | 0.8252 |
| 0.1853 | 49.0 | 8967 | 1.4120 | 0.8165 |
| 0.1671 | 50.0 | 9150 | 1.4166 | 0.8214 |
| 0.1671 | 51.0 | 9333 | 1.4411 | 0.8501 |
| 0.1513 | 52.0 | 9516 | 1.4692 | 0.8394 |
| 0.1513 | 53.0 | 9699 | 1.4640 | 0.8391 |
| 0.1513 | 54.0 | 9882 | 1.4501 | 0.8419 |
| 0.133 | 55.0 | 10065 | 1.4134 | 0.8351 |
| 0.133 | 56.0 | 10248 | 1.4593 | 0.8405 |
| 0.133 | 57.0 | 10431 | 1.4560 | 0.8389 |
| 0.1198 | 58.0 | 10614 | 1.4734 | 0.8334 |
| 0.1198 | 59.0 | 10797 | 1.4649 | 0.8318 |
| 0.1198 | 60.0 | 10980 | 1.4659 | 0.8100 |
| 0.1109 | 61.0 | 11163 | 1.4784 | 0.8119 |
| 0.1109 | 62.0 | 11346 | 1.4938 | 0.8149 |
| 0.1063 | 63.0 | 11529 | 1.5050 | 0.8152 |
| 0.1063 | 64.0 | 11712 | 1.4773 | 0.8176 |
| 0.1063 | 65.0 | 11895 | 1.4836 | 0.8261 |
| 0.0966 | 66.0 | 12078 | 1.4979 | 0.8157 |
| 0.0966 | 67.0 | 12261 | 1.4603 | 0.8048 |
| 0.0966 | 68.0 | 12444 | 1.4803 | 0.8127 |
| 0.0867 | 69.0 | 12627 | 1.4974 | 0.8130 |
| 0.0867 | 70.0 | 12810 | 1.4721 | 0.8078 |
| 0.0867 | 71.0 | 12993 | 1.4644 | 0.8192 |
| 0.0827 | 72.0 | 13176 | 1.4835 | 0.8138 |
| 0.0827 | 73.0 | 13359 | 1.4934 | 0.8122 |
| 0.0734 | 74.0 | 13542 | 1.4951 | 0.8062 |
| 0.0734 | 75.0 | 13725 | 1.4908 | 0.8070 |
| 0.0734 | 76.0 | 13908 | 1.4876 | 0.8124 |
| 0.0664 | 77.0 | 14091 | 1.4934 | 0.8053 |
| 0.0664 | 78.0 | 14274 | 1.4603 | 0.8048 |
| 0.0664 | 79.0 | 14457 | 1.4732 | 0.8073 |
| 0.0602 | 80.0 | 14640 | 1.4925 | 0.8078 |
| 0.0602 | 81.0 | 14823 | 1.4812 | 0.8064 |
| 0.057 | 82.0 | 15006 | 1.4950 | 0.8013 |
| 0.057 | 83.0 | 15189 | 1.4785 | 0.8056 |
| 0.057 | 84.0 | 15372 | 1.4856 | 0.7993 |
| 0.0517 | 85.0 | 15555 | 1.4755 | 0.8034 |
| 0.0517 | 86.0 | 15738 | 1.4813 | 0.8034 |
| 0.0517 | 87.0 | 15921 | 1.4966 | 0.8048 |
| 0.0468 | 88.0 | 16104 | 1.4883 | 0.8002 |
| 0.0468 | 89.0 | 16287 | 1.4746 | 0.8023 |
| 0.0468 | 90.0 | 16470 | 1.4697 | 0.7974 |
| 0.0426 | 91.0 | 16653 | 1.4775 | 0.8004 |
| 0.0426 | 92.0 | 16836 | 1.4852 | 0.8023 |
| 0.0387 | 93.0 | 17019 | 1.4868 | 0.8004 |
| 0.0387 | 94.0 | 17202 | 1.4785 | 0.8021 |
| 0.0387 | 95.0 | 17385 | 1.4892 | 0.8015 |
| 0.0359 | 96.0 | 17568 | 1.4862 | 0.8018 |
| 0.0359 | 97.0 | 17751 | 1.4851 | 0.8007 |
| 0.0359 | 98.0 | 17934 | 1.4846 | 0.7999 |
| 0.0347 | 99.0 | 18117 | 1.4852 | 0.7993 |
| 0.0347 | 100.0 | 18300 | 1.4848 | 0.8004 |
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id ivanlau/wav2vec2-large-xls-r-300m-cantonese --dataset mozilla-foundation/common_voice_8_0 --config zh-HK --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id ivanlau/wav2vec2-large-xls-r-300m-cantonese --dataset speech-recognition-community-v2/dev_data --config zh-HK --split validation --chunk_length_s 5.0 --stride_length_s 1.0 --log_outputs
```
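For quick transcription outside of the evaluation scripts, a minimal inference sketch could look like this; the audio path is a placeholder for a 16 kHz Cantonese recording:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ivanlau/wav2vec2-large-xls-r-300m-cantonese",
)
# chunking parameters mirror the dev-data evaluation command above
print(asr("sample.wav", chunk_length_s=5, stride_length_s=1))
```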
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
kuppuluri/telugu_bertu_pos | 6013732101026333f2622a09fc9cf50d9ff86669 | 2021-12-02T18:15:36.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kuppuluri | null | kuppuluri/telugu_bertu_pos | 27 | null | transformers | 7,436 | # Part of Speech tagging Model for Telugu
#### How to use
Use the script below from your Python terminal, as the web interface for inference has a few encoding issues for Telugu.
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
```python
from simpletransformers.ner import NERModel
model = NERModel('bert',
'kuppuluri/telugu_bertu_pos',
args={"use_multiprocessing": False},
labels=[
'QC', 'JJ', 'NN', 'QF', 'RDP', 'O',
'NNO', 'PRP', 'RP', 'VM', 'WQ',
'PSP', 'UT', 'CC', 'INTF', 'SYMP',
'NNP', 'INJ', 'SYM', 'CL', 'QO',
'DEM', 'RB', 'NST', ],
use_cuda=False)
text = "విరాట్ కోహ్లీ కూడా అదే నిర్లక్ష్యాన్ని ప్రదర్శించి కేవలం ఒక పరుగుకే రనౌటై పెవిలియన్ చేరాడు ."
results = model.predict([text])
```
## Training data
Training data is from https://github.com/anikethjr/NER_Telugu
## Eval results
On the test set my results were:
```
eval_loss = 0.0036797842364565416
f1_score = 0.9983795127912227
precision = 0.9984325602401637
recall = 0.9983264709788816
```
|
liam168/chat-DialoGPT-small-en | 6bbc984a0d393397284e0fa9981fbfe8ff5f32e9 | 2021-08-03T10:25:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"license:apache-2.0"
] | text-generation | false | liam168 | null | liam168/chat-DialoGPT-small-en | 27 | null | transformers | 7,437 | ---
language: en
widget:
- text: "I got a surprise for you, Morty."
license: apache-2.0
---
# liam168/chat-DialoGPT-small-en
## Model description
A model trained on English chat data.
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
mode_name = 'liam168/chat-DialoGPT-small-en'
tokenizer = AutoTokenizer.from_pretrained(mode_name)
model = AutoModelForCausalLM.from_pretrained(mode_name)
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print the last output tokens from the bot
    print("Answer: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
ml6team/gpt2-small-dutch-finetune-oscar | 5fc680102b653316458392529a84c38f547a2840 | 2021-05-23T09:47:18.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"nl",
"transformers",
"adaption",
"recycled",
"gpt2-small"
] | text-generation | false | ml6team | null | ml6team/gpt2-small-dutch-finetune-oscar | 27 | 6 | transformers | 7,438 | ---
language: nl
widget:
- text: "De regering heeft beslist dat"
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# Dutch finetuned GPT2
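The card currently only names the model; a minimal generation sketch using the widget prompt from the metadata might look like:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ml6team/gpt2-small-dutch-finetune-oscar")
print(generator("De regering heeft beslist dat", max_length=50, num_return_sequences=1))
```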
|
mmcquade11/reviews-sentiment-analysis-two | da35de328541eb143c83f51edb901609d84f6d61 | 2021-12-02T17:31:19.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | mmcquade11 | null | mmcquade11/reviews-sentiment-analysis-two | 27 | null | transformers | 7,439 | Entry not found |
mmm-da/anekdot_funny1_rugpt3Small | 3ea216a3b11bdedf33dac080a455de7190766e66 | 2021-05-23T09:49:50.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mmm-da | null | mmm-da/anekdot_funny1_rugpt3Small | 27 | null | transformers | 7,440 | Entry not found |
murathankurfali/bert-large-uncased-pdtb2-explicit-four-way | 789a54af5f0f25c086b5cdc311de6ec57c7ce902 | 2021-07-01T19:47:49.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | murathankurfali | null | murathankurfali/bert-large-uncased-pdtb2-explicit-four-way | 27 | null | transformers | 7,441 | Entry not found |
nateraw/timm-resnet50-beans | 5fab928ecf08198e592f4b893465eae8dcbe0230 | 2021-09-07T17:21:50.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nateraw | null | nateraw/timm-resnet50-beans | 27 | 1 | timm | 7,442 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for `timm-resnet50-beans`
**TODO**
**For now, try dragging and dropping this image into the inference widget. It should classify as angular_leaf_spot.**

|
navteca/quora-roberta-large | 6c13fabe049c2f14a94e56a588523036e4680a14 | 2021-03-10T14:57:04.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:quora",
"transformers",
"license:mit"
] | text-classification | false | navteca | null | navteca/quora-roberta-large | 27 | null | transformers | 7,443 | ---
datasets:
- quora
language: en
license: mit
pipeline_tag: text-classification
tags:
- roberta
- text-classification
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [roberta-large](https://huggingface.co/roberta-large).
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset.
The model will predict a score between 0 and 1: How likely the two given questions are duplicates.
Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.
## Usage and Performance
The trained model can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
print(scores)
```
|
peril10/Pypinion | 060f2f2ed8cd5b9c8850079d5a9bfba7cbc52267 | 2021-05-20T19:26:01.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | peril10 | null | peril10/Pypinion | 27 | null | transformers | 7,444 | Entry not found |
persiannlp/mt5-large-parsinlu-squad-reading-comprehension | 4563f098fcd8bd51bc25bf6b6a6a8bf77b62be62 | 2021-09-23T16:20:26.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:squad",
"transformers",
"reading-comprehension",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-large-parsinlu-squad-reading-comprehension | 27 | null | transformers | 7,445 |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- reading-comprehension
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- squad
metrics:
- f1
---
# Reading Comprehension (مدل برای پاسخ به درک مطلب)
This is a mT5-based model for reading comprehension.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-squad-reading-comprehension"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(paragraph, question, **generator_args):
    input_ids = tokenizer.encode(question + "\n" + paragraph, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    output = tokenizer.batch_decode(res, skip_special_tokens=True)
    print(output)
    return output
run_model(
"یک شی را دارای تقارن مینامیم زمانی که ان شی را بتوان به دو یا چند قسمت تقسیم کرد که آنها قسمتی از یک طرح سازمان یافته باشند یعنی بر روی شکل تنها جابجایی و چرخش و بازتاب و تجانس انجام شود و در اصل شکل تغییری به وجود نیایید آنگاه ان را تقارن مینامیم مرکز تقارن:اگر در یک شکل نقطهای مانندA وجود داشته باشد که هر نقطهٔ روی شکل (محیط) نسبت به نقطه یAمتقارن یک نقطهٔ دیگر شکل (محیط) باشد، نقطهٔ Aمرکز تقارن است. یعنی هر نقطه روی شکل باید متقارنی داشته باشد شکلهای که منتظم هستند و زوج ضلع دارند دارای مرکز تقارند ولی شکلهای فرد ضلعی منتظم مرکز تقارن ندارند. متوازیالأضلاع و دایره یک مرکز تقارن دارند ممکن است یک شکل خط تقارن نداشته باشد ولی مرکز تقارن داشته باشد. (منبع:س. گ)",
"اشکالی که یک مرکز تقارن دارند"
)
run_model(
"شُتُر یا اُشتر را که در زبان پهلوی (ushtar)[نیازمند منبع] میگفتند حیوانی است نیرومند و تنومند با توش و توان بالا از خانواده شتران؛ شبه نشخوارکننده و با دست و گردنی دراز. بر پشت خود یک یا دو کوهان دارد که ساختارش از پیه و چربی است. در دین اسلام گوشت او حلال است. اما ذبح آن با دیگر جانوران حلال گوشت متفاوت است و آن را نحر (بریدن گلو) میکنند و اگر سر آن را مانند گوسفند پیش از نحر ببرند گوشت آن حلال نیست. شیرش نیز نوشیده میشود ولی بیشتر کاربرد بارکشی دارد. پشم و پوستش نیز برای ریسندگی و پارچهبافی و کفشدوزی کاربرد دارد. گونههای دیگری از شتران نیز در آمریکای جنوبی زندگی میکنند، به نامهای لاما، آلپاکا، گواناکو که دارای کوهان نیستند. شتر ویژگیهای خاصّی دارد که مهمترین آنها تحمّل شرایط سخت صحرا و دماهای گوناگون و بهویژه گرمای شدید تابستان و کمبود آب و علوفه است. ترکیب جسمانی شتر با دیگر جانوران اختلاف زیادی دارد، و این اختلاف انگیزه شده که شتر در درازا روزهای سال در بیابان زندگی کند و از بوتهها و درختچههای گوناگون صحرایی و کویری و حتی از بوتههای شور و خاردار تغذیه کند. عربها از زمانهای بسیار دور از شتر استفاده کرده و میکنند. آنها به این حیوان اهلی لقب کشتی صحرا (به عربی: سفینةالصحراء) دادهاند.",
"غذای شترچیست؟"
)
run_model(
"""حسین میرزایی میگوید مرحله اول پرداخت وام حمایتی کرونا به همگی خانوارهای یارانهبگیر متقاضی تکمیل شده است و حال چهار میلیون خانوار که به عنوان "اقشار خاص" و "آسیبپذیر" شناسایی شدند، میتوانند برای یک میلیون تومان وام دیگر درخواست بدهند. آقای میرزایی گفته خانوارهای "آسیبپذیر" که شرایط گرفتن وام یک میلیونی اضافی را دارند با پیامک از این امکان مطلع شدهاند. بنا به گزارشهای رسمی با شیوع کرونا در ایران یک میلیون نفر بیکار شدهاند و درآمد کارکنان مشاغل غیررسمی نیز ضربه قابل توجهی خورده است. ارزش ریال هم در هفتههای اخیر در برابر ارزهای خارجی سقوط کرده است. اقتصاد ایران پیش از شیوع کرونا نیز با مشکلات مزمن رکود، تورم، تحریم و فساد روبرو بود.""",
"وام یارانه به چه کسانی میدهند؟"
)
run_model(
"در ۲۲ ژوئن ۱۹۴۱ نیروهای محور در عملیات بارباروسا حمله سنگینی به اتحاد شوروی کرده و یکی از بزرگترین نبردهای زمینی تاریخ بشر را رقم زدند. همچنین جبهه شرقی باعث به دام افتادن نیروهای محور شد و بیش از همه ارتش آلمان نازی را درگیر جنگ فرسایشی کرد. در دسامبر ۱۹۴۱ ژاپن یک در عملیاتی ناگهانی با نام نبرد پرل هاربر به پایگاه دریایی ایالات متحده آمریکا حمله کرد. به دنبال این اتفاق آمریکا نیز بلافاصله علیه ژاپن اعلان جنگ کرد که با حمایت بریتانیا همراه شد. پس از آن متحدین (نیروهای محور در اروپا) نیز با اتحاد ژاپن علیه آمریکا اعلام جنگ کردند. دستآوردهای ژاپن در یورش به آمریکا باعث ایجاد این احساس در آسیا شد که آسیا از تسلط غرب خارج شدهاست از این رو بسیاری از ارتشهای شکست خورده با آنها همراهی کردند.",
"چرا امریکا وارد جنگ جهانی دوم شد؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/ |
rsedlr/RickBot | c7fad89f497874b323bd18131aa8f864574a3874 | 2021-08-12T08:26:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rsedlr | null | rsedlr/RickBot | 27 | 2 | transformers | 7,446 | ---
tags:
- conversational
---
# DialoGPT-small model trained on dialogue from Rick and Morty
### [Chat to me on Chai!](https://chai.ml/chat/share/_bot_de374c84-9598-4848-996b-736d0cc02f6b)
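A minimal single-turn chat sketch with 🤗 Transformers (the prompt is just an example; for multi-turn chat keep concatenating the history as in the DialoGPT docs):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rsedlr/RickBot")
model = AutoModelForCausalLM.from_pretrained("rsedlr/RickBot")

prompt = "What do you think of Jerry?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```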
Make your own Rick bot [here](https://colab.research.google.com/drive/1o5LxBspm-C28HQvXN-PRQavapDbm5WjG?usp=sharing) |
s3h/gec-token-classification-arabert-v2 | 0fa4a65524bb1a23ba8463fd73a492c90789d090 | 2022-01-05T20:12:34.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | s3h | null | s3h/gec-token-classification-arabert-v2 | 27 | null | transformers | 7,447 | Entry not found |
sammy786/wav2vec2-xlsr-dhivehi | 14770c37461b4ffdba3e95b2f2f83d67d414e3af | 2022-03-24T11:58:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dv",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-dhivehi | 27 | null | transformers | 7,448 | ---
language:
- dv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- dv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-dhivehi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: dv
metrics:
- name: Test WER
type: wer
value: 26.91
- name: Test CER
type: cer
value: 4.02
---
# sammy786/wav2vec2-xlsr-dhivehi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - dv dataset.
It achieves the following results on the evaluation set (a 10 percent hold-out of the train set merged with the other and dev datasets):
- Loss: 14.86
- Wer: 29.32
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data: Common Voice Dhivehi train.tsv, dev.tsv and other.tsv
## Training procedure
For creating the train dataset, all possible datasets were appended and 90-10 split was used.
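A rough sketch of that split with 🤗 Datasets might look like the following; the split names and the seed are assumptions based on the description and hyperparameters above, and the actual text normalisation and filtering are not shown:
```python
from datasets import load_dataset, concatenate_datasets

# Append all available Common Voice 8.0 Dhivehi splits, then hold out 10% for evaluation.
parts = [
    load_dataset("mozilla-foundation/common_voice_8_0", "dv", split=s, use_auth_token=True)
    for s in ("train", "validation", "other")
]
full = concatenate_datasets(parts)
split = full.train_test_split(test_size=0.1, seed=13)
train_ds, eval_ds = split["train"], split["test"]
print(len(train_ds), len(eval_ds))
```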
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|-------|---------------|-----------------|----------|
| 200 | 4.883800 | 3.190218 | 1.000000 |
| 400 | 1.600100 | 0.497887 | 0.726159 |
| 600 | 0.928500 | 0.358781 | 0.603892 |
| 800 | 0.867900 | 0.309132 | 0.570786 |
| 1000 | 0.743100 | 0.309116 | 0.552954 |
| 1200 | 0.725100 | 0.266839 | 0.538378 |
| 1400 | 0.786200 | 0.259797 | 0.535897 |
| 1600 | 0.655700 | 0.245691 | 0.517290 |
| 1800 | 0.650500 | 0.246957 | 0.516204 |
| 2000 | 0.685500 | 0.234808 | 0.516204 |
| 2200 | 0.487100 | 0.228409 | 0.507753 |
| 2400 | 0.401300 | 0.221087 | 0.495968 |
| 2600 | 0.359300 | 0.212476 | 0.489301 |
| 2800 | 0.347300 | 0.204848 | 0.487750 |
| 3000 | 0.327000 | 0.203163 | 0.478756 |
| 3200 | 0.337100 | 0.210235 | 0.487595 |
| 3400 | 0.308900 | 0.201471 | 0.491316 |
| 3600 | 0.292600 | 0.192437 | 0.476120 |
| 3800 | 0.289600 | 0.198398 | 0.468445 |
| 4000 | 0.290200 | 0.193484 | 0.467204 |
| 4200 | 0.272600 | 0.193999 | 0.470150 |
| 4400 | 0.266700 | 0.187384 | 0.460769 |
| 4600 | 0.253800 | 0.187279 | 0.476663 |
| 4800 | 0.266400 | 0.197395 | 0.466817 |
| 5000 | 0.258000 | 0.188920 | 0.456660 |
| 5200 | 0.237200 | 0.180770 | 0.457358 |
| 5400 | 0.237900 | 0.178149 | 0.448287 |
| 5600 | 0.232600 | 0.179827 | 0.461002 |
| 5800 | 0.228500 | 0.182142 | 0.445185 |
| 6000 | 0.221000 | 0.173619 | 0.440688 |
| 6200 | 0.219500 | 0.172291 | 0.442859 |
| 6400 | 0.219400 | 0.173339 | 0.430609 |
| 6600 | 0.201900 | 0.177552 | 0.426423 |
| 6800 | 0.199000 | 0.173157 | 0.429834 |
| 7000 | 0.200000 | 0.166503 | 0.423709 |
| 7200 | 0.194600 | 0.171812 | 0.429834 |
| 7400 | 0.192100 | 0.164989 | 0.420530 |
| 7600 | 0.185000 | 0.168355 | 0.418825 |
| 7800 | 0.175100 | 0.168128 | 0.419290 |
| 8000 | 0.173500 | 0.167959 | 0.424950 |
| 8200 | 0.172200 | 0.173643 | 0.414793 |
| 8400 | 0.164200 | 0.167020 | 0.406342 |
| 8600 | 0.170800 | 0.168050 | 0.405334 |
| 8800 | 0.157900 | 0.164290 | 0.396573 |
| 9000 | 0.159900 | 0.163188 | 0.397426 |
| 9200 | 0.151700 | 0.164370 | 0.390991 |
| 9400 | 0.146600 | 0.165053 | 0.392852 |
| 9600 | 0.142200 | 0.164939 | 0.391844 |
| 9800 | 0.148300 | 0.164422 | 0.385719 |
| 10000 | 0.136200 | 0.166569 | 0.385951 |
| 10200 | 0.140700 | 0.161377 | 0.379594 |
| 10400 | 0.133300 | 0.165194 | 0.378276 |
| 10600 | 0.131300 | 0.164328 | 0.369205 |
| 10800 | 0.135500 | 0.160254 | 0.373236 |
| 11000 | 0.121100 | 0.163522 | 0.372693 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-dhivehi --dataset mozilla-foundation/common_voice_8_0 --config dv --split test
``` |
speechbrain/REAL-M-sisnr-estimator | 7308f2f4d0390ee68a31be850d685a323e891b01 | 2021-11-03T21:32:48.000Z | [
"en",
"dataset:REAL-M",
"dataset:WHAMR!",
"arxiv:2110.10812",
"arxiv:2106.04624",
"speechbrain",
"audio-source-separation",
"Source Separation",
"Speech Separation",
"WHAM!",
"REAL-M",
"SepFormer",
"Transformer",
"pytorch",
"license:apache-2.0"
] | null | false | speechbrain | null | speechbrain/REAL-M-sisnr-estimator | 27 | 1 | speechbrain | 7,449 | ---
language: "en"
thumbnail:
tags:
- audio-source-separation
- Source Separation
- Speech Separation
- WHAM!
- REAL-M
- SepFormer
- Transformer
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- REAL-M
- WHAMR!
metrics:
- SI-SNRi
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Neural SI-SNR Estimator
The Neural SI-SNR Estimator predicts the scale-invariant signal-to-noise ratio (SI-SNR) from the separated signals and the original mixture.
The performance estimation is blind (i.e., no target signals are needed). This model allows a performance estimation on real mixtures, where the targets are not available.
This repository provides the SI-SNR estimator model introduced for the REAL-M dataset.
The REAL-M dataset can be downloaded from [this link](https://sourceseparationresearch.com/static/REAL-M-v0.1.0.tar.gz).
The paper for the REAL-M dataset can be found on [this arxiv link](https://arxiv.org/pdf/2110.10812.pdf).
| Release | Test-Set (WHAMR!) average l1 error |
|:---:|:---:|
| 18-10-21 | 1.7 dB |
## Install SpeechBrain
First of all, currently you need to install SpeechBrain from the source:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).
### Minimal example for SI-SNR estimation
```python
from speechbrain.pretrained import SepformerSeparation as separator
from speechbrain.pretrained.interfaces import fetch
from speechbrain.pretrained.interfaces import SNREstimator as snrest
import torchaudio
# 1- Download a test mixture
fetch("test_mixture.wav", source="speechbrain/sepformer-wsj02mix", savedir=".", save_filename="test_mixture.wav")
# 2- Separate the mixture with a pretrained model (sepformer-whamr in this case)
model = separator.from_hparams(source="speechbrain/sepformer-whamr", savedir='pretrained_models/sepformer-whamr')
est_sources = model.separate_file(path='test_mixture.wav')
# 3- Estimate the performance
snr_est_model = snrest.from_hparams(source="speechbrain/REAL-M-sisnr-estimator",savedir='pretrained_models/REAL-M-sisnr-estimator')
mix, fs = torchaudio.load('test_mixture.wav')
snrhat = snr_est_model.estimate_batch(mix, est_sources)
print(snrhat) # Estimates are in dB / 10 (in the range 0-1, e.g., 0 --> 0dB, 1 --> 10dB)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain (fc2eabb7).
To train it from scratch follows these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/REAL-M/sisnr-estimation
python train.py hparams/pool_sisnrestimator.yaml --data_folder /yourLibri2Mixpath --base_folder_dm /yourLibriSpeechpath --rir_path /yourpathforwhamrRIRs --dynamic_mixing True --use_whamr_train True --whamr_data_folder /yourpath/whamr --base_folder_dm_whamr /yourpath/wsj0-processed/si_tr_s
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1NGncbjvLeGfbUqmVi6ej-NH9YQn5vBmI).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
#### Referencing REAL-M
```bibtex
@misc{subakan2021realm,
title={REAL-M: Towards Speech Separation on Real Mixtures},
author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and François Grondin},
year={2021},
eprint={2110.10812},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/ |
sultan/BioM-ALBERT-xxlarge-SQuAD2 | 71d586c571c68eaad6e1c994b557a2b1643f7e1d | 2021-08-10T21:59:59.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sultan | null | sultan/BioM-ALBERT-xxlarge-SQuAD2 | 27 | null | transformers | 7,450 | # BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.
# Model Description
This model is fine-tuned on the SQuAD2.0 dataset. Fine-tuning the biomedical language model on the SQuAD dataset helps improve the score on the BioASQ challenge. If you plan to work with BioASQ or biomedical QA tasks, it's better to use this model over BioM-ALBERT-xxlarge. This model (TensorFlow version) took the lead in the BioASQ9b-Factoid challenge under the name UDEL-LAB1.
If you want to try our TensorFlow example and see how to fine-tune ALBERT on SQuAD and BioASQ, follow this link:
https://github.com/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ALBERT_xxlarge_on_TPU.ipynb
To see the full details of BioASQ9B results, please check this link http://participants-area.bioasq.org/results/9b/phaseB/ ( you need to register).
The Hugging Face library doesn't implement the layer-wise decay feature, which affects performance on the SQuAD task. The result of BioM-ALBERT-xxlarge-SQuAD reported in our paper is 87.00 (F1) since we used the ALBERT open-source code with the TF checkpoint, which supports layer-wise decay.
Result with PyTorch and V100 GPU
```
***** eval metrics *****
HasAns_exact = 77.6484
HasAns_f1 = 85.0136
HasAns_total = 5928
NoAns_exact = 86.577
NoAns_f1 = 86.577
NoAns_total = 5945
best_exact = 82.1191
best_exact_thresh = 0.0
best_f1 = 85.7964
best_f1_thresh = 0.0
eval_samples = 12551
exact = 82.1191
f1 = 85.7964
total = 11873
```
To reproduce results in Google Colab:
- Make sure you have GPU enabled.
- Clone and install the required libraries:
```bash
!git clone https://github.com/huggingface/transformers
!pip3 install -e transformers
!pip3 install sentencepiece
!pip3 install -r /content/transformers/examples/pytorch/question-answering/requirements.txt
```
- Run the evaluation script:
```bash
python /content/transformers/examples/pytorch/question-answering/run_qa.py --model_name_or_path BioM-ALBERT-xxlarge-SQuAD2 \
--do_eval \
--version_2_with_negative \
--per_device_eval_batch_size 8 \
--dataset_name squad_v2 \
--overwrite_output_dir \
--fp16 \
--output_dir out
```
You don't need to download the SQuAD2 dataset. The code will download it from the HuggingFace datasets hub.
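For plain inference (as opposed to reproducing the evaluation above), a short sketch with the `question-answering` pipeline could look like this; the question/context pair is illustrative only:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="sultan/BioM-ALBERT-xxlarge-SQuAD2")
result = qa(
    question="What does fine-tuning on SQuAD help improve?",
    context="Fine-tuning the biomedical language model on the SQuAD dataset helps improve the score on the BioASQ challenge.",
    handle_impossible_answer=True,  # the model was trained with SQuAD2-style unanswerable questions
)
print(result)
```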
Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints.
# Acknowledgment
We would like to acknowledge the support we have from Tensorflow Research Cloud (TFRC) team to grant us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` |
sunhao666/chi-sum2 | 64c440c9492feeab310f49034427c35da46c209a | 2021-05-20T04:01:09.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | sunhao666 | null | sunhao666/chi-sum2 | 27 | null | transformers | 7,451 | Entry not found |
testing/autonlp-ingredient_sentiment_analysis-19126711 | 0e0b457a8d5a22c1801d966612513a68de076390 | 2021-11-04T15:54:28.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:testing/autonlp-data-ingredient_sentiment_analysis",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | testing | null | testing/autonlp-ingredient_sentiment_analysis-19126711 | 27 | null | transformers | 7,452 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- testing/autonlp-data-ingredient_sentiment_analysis
co2_eq_emissions: 1.8458289701133035
---
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 19126711
- CO2 Emissions (in grams): 1.8458289701133035
## Validation Metrics
- Loss: 0.054593171924352646
- Accuracy: 0.9790668170284748
- Precision: 0.8029411764705883
- Recall: 0.6026490066225165
- F1: 0.6885245901639344
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/testing/autonlp-ingredient_sentiment_analysis-19126711
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("testing/autonlp-ingredient_sentiment_analysis-19126711", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("testing/autonlp-ingredient_sentiment_analysis-19126711", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
textattack/roberta-base-rotten_tomatoes | 6cc7e32fb4fd5113a9b164cf045bda1fbb5c847f | 2021-05-20T22:18:23.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | textattack | null | textattack/roberta-base-rotten_tomatoes | 27 | null | transformers | 7,453 | ## roberta-base fine-tuned with TextAttack on the rotten_tomatoes dataset
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 128, a learning
rate of 5e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9033771106941839, as measured by the
eval set accuracy, found after 9 epochs.
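A minimal inference sketch is shown below. It assumes the hosted checkpoint exposes a sequence-classification head that `AutoModelForSequenceClassification` can load directly (the repository's auto-detected pipeline tag is fill-mask, so loading through TextAttack's own utilities may be required instead); the example sentence and the label order in the comment are illustrative assumptions.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned checkpoint (assumes a sequence-classification head is present).
tokenizer = AutoTokenizer.from_pretrained("textattack/roberta-base-rotten_tomatoes")
model = AutoModelForSequenceClassification.from_pretrained("textattack/roberta-base-rotten_tomatoes")

inputs = tokenizer("A gripping, beautifully shot film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0 = negative, 1 = positive (label order is an assumption)
```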
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-WNLI | 7ae8ccdb868bfe9abbf9a558ab6f583145f4afd6 | 2020-07-06T16:34:15.000Z | [
"pytorch",
"xlnet",
"text-generation",
"transformers"
] | text-generation | false | textattack | null | textattack/xlnet-base-cased-WNLI | 27 | null | transformers | 7,454 | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 3e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5774647887323944, as measured by the
eval set accuracy, found after 0 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
mitiku/AmharicWICPostag | 8af009903b374642b1816ba76922ada07fa760d2 | 2022-03-20T10:10:58.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | mitiku | null | mitiku/AmharicWICPostag | 27 | null | transformers | 7,455 | ---
tags:
- generated_from_trainer
model-index:
- name: AmharicWICPostag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AmharicWICPostag
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
abidlabs/speech-text | 7f0faf15157695f3878372ae93381ae9c24ab662 | 2022-03-23T18:33:30.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"transformers",
"audio",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | abidlabs | null | abidlabs/speech-text | 27 | null | transformers | 7,456 | ---
language: en
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- en
- hf-asr-leaderboard
- mozilla-foundation/common_voice_6_0
- robust-speech-event
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 English by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice en
type: common_voice
args: en
metrics:
- name: Test WER
type: wer
value: 19.06
- name: Test CER
type: cer
value: 7.69
- name: Test WER (+LM)
type: wer
value: 14.81
- name: Test CER (+LM)
type: cer
value: 6.84
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: en
metrics:
- name: Dev WER
type: wer
value: 27.72
- name: Dev CER
type: cer
value: 11.65
- name: Dev WER (+LM)
type: wer
value: 20.85
- name: Dev CER (+LM)
type: cer
value: 11.01
---
# Wav2Vec2-Large-XLSR-53-English
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on English using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-english")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "en"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-english"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| "SHE'LL BE ALL RIGHT." | SHE'LL BE ALL RIGHT |
| SIX | SIX |
| "ALL'S WELL THAT ENDS WELL." | ALL AS WELL THAT ENDS WELL |
| DO YOU MEAN IT? | DO YOU MEAN IT |
| THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE, BUT STILL CAUSES REGRESSIONS. | THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE BUT STILL CAUSES REGRESSION |
| HOW IS MOZILLA GOING TO HANDLE AMBIGUITIES LIKE QUEUE AND CUE? | HOW IS MOSLILLAR GOING TO HANDLE ANDBEWOOTH HIS LIKE Q AND Q |
| "I GUESS YOU MUST THINK I'M KINDA BATTY." | RUSTIAN WASTIN PAN ONTE BATTLY |
| NO ONE NEAR THE REMOTE MACHINE YOU COULD RING? | NO ONE NEAR THE REMOTE MACHINE YOU COULD RING |
| SAUCE FOR THE GOOSE IS SAUCE FOR THE GANDER. | SAUCE FOR THE GUICE IS SAUCE FOR THE GONDER |
| GROVES STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD. | GRAFS STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset mozilla-foundation/common_voice_6_0 --config en --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset speech-recognition-community-v2/dev_data --config en --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021wav2vec2-large-xlsr-53-english,
title={XLSR Wav2Vec2 English by Jonatas Grosman},
author={Grosman, Jonatas},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english}},
year={2021}
}
``` |
Ensheng/graphcodebert-v1 | 99020eb25b0e7c08f757fc3747b6e013ebdd82fe | 2022-03-10T08:32:36.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Ensheng | null | Ensheng/graphcodebert-v1 | 27 | null | transformers | 7,457 | Entry not found |
ai4bharat/MultiIndicQuestionGenerationSS | 508601d8c29ba2b6165df2aca994863f0851320b | 2022-05-23T17:19:03.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te",
"dataset:ai4bharat/IndicQuestionGeneration",
"dataset:squad",
"arxiv:2203.05437",
"transformers",
"question-generation",
"multilingual",
"nlp",
"indicnlp",
"autotrain_compatible"
] | text2text-generation | false | ai4bharat | null | ai4bharat/MultiIndicQuestionGenerationSS | 27 | 1 | transformers | 7,458 | ---
tags:
- question-generation
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicQuestionGeneration
- squad
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
licenses:
- cc-by-nc-4.0
---
# MultiIndicQuestionGenerationSS
MultiIndicQuestionGenerationSS is a multilingual, sequence-to-sequence pre-trained model, an [IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) checkpoint fine-tuned on the 11 languages of the [IndicQuestionGeneration](https://huggingface.co/datasets/ai4bharat/IndicQuestionGeneration) dataset. For fine-tuning details,
see the [paper](https://arxiv.org/abs/2203.05437). You can use MultiIndicQuestionGenerationSS to build question generation applications for Indian languages by fine-tuning the model with supervised training data for the question generation task. Some salient features of the MultiIndicQuestionGenerationSS are:
<ul>
<li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Oriya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for finetuning and decoding. </li>
<li> Fine-tuned on large Indic language corpora (770 K examples). </li>
<li> Unlike ai4bharat/MultiIndicQuestionGenerationUnified, each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari. </li>
</ul>
You can read more about MultiIndicQuestionGenerationSS in this <a href="https://arxiv.org/abs/2203.05437">paper</a>.
## Using this model in `transformers`
```python
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicQuestionGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicQuestionGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicQuestionGenerationSS")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicQuestionGenerationSS")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input and outputs. The format below is how IndicBARTSS was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("7 फरवरी, 2016 [SEP] खेल 7 फरवरी, 2016 को कैलिफोर्निया के सांता क्लारा में सैन फ्रांसिस्को खाड़ी क्षेत्र में लेवी स्टेडियम में खेला गया था। </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
out = tokenizer("<2hi> सुपर बाउल किस दिन खेला गया? </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
# For loss
model_outputs.loss ## This is not label smoothed.
# For logits
model_outputs.logits
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero
model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3,encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # कब होगा पहला एएफएल गेम?
```
## Benchmarks
Scores on the `IndicQuestionGeneration` test sets are as follows:
Language | RougeL
---------|----------------------------
as | 20.73
bn | 30.38
gu | 28.13
hi | 34.42
kn | 23.77
ml | 22.24
mr | 23.62
or | 27.53
pa | 32.53
ta | 23.49
te | 25.81
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
# License
The model is available under the MIT License. |
krinal214/xlm-all | 700921a0c6c3609e8cfbc94ace7728a4f4415bdb | 2022-03-16T13:01:05.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"dataset:tydiqa",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | krinal214 | null | krinal214/xlm-all | 27 | null | transformers | 7,459 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: xlm-all-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-all-final
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4483 | 1.0 | 3381 | 0.6038 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
Visual-Attention-Network/van-tiny | dda753ad7f885157a796d5347318a2244c33e4f3 | 2022-03-31T12:45:47.000Z | [
"pytorch",
"van",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2202.09741",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | Visual-Attention-Network | null | Visual-Attention-Network/van-tiny | 27 | null | transformers | 7,460 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Van
Van model trained on imagenet-1k. It was introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [this repository](https://github.com/Visual-Attention-Network/VAN-Classification).
Disclaimer: The team releasing Van did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=van) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, VanForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("Visual-Attention-Network/van-base")
>>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/van). |
joangog/pwmfd-yolov5 | 2261475efc2ab2bd5193fc77a8b7f1e911e9d5de | 2022-07-10T12:16:29.000Z | [
"pytorch",
"tensorboard",
"en",
"dataset:pwmfd",
"transformers",
"yolov5"
] | null | false | joangog | null | joangog/pwmfd-yolov5 | 27 | 0 | transformers | 7,461 | ---
language:
- en
tags:
- pytorch
- yolov5
datasets:
- pwmfd
metrics:
- coco
---
Optimized YOLOv5 model trained on the PWMFD medical masks dataset using transfer learning from COCO with frozen backbone, data augmentations such as mosaic, and an input image size of 320 x 320.
**Architecture:** [here](https://huggingface.co/joangog/pwmfd-yolov5/tensorboard?scroll=1#graphs&run=.)
**AP:**
- Evaluation from pycocotools: **67%**
- Evaluation from yolov5 val.py script: **71%**
**FPS:**
- NVIDIA GeForce GTX 960, 4 GB: **69 fps**
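A minimal inference sketch is shown below. The weight filename (`best.pt`) and the use of the Ultralytics `torch.hub` loader are assumptions, since the card does not state how the checkpoint is packaged; the test image path is also illustrative.
```python
import torch
from huggingface_hub import hf_hub_download

# Download the trained weights from this repo (the filename is an assumption).
weights = hf_hub_download(repo_id="joangog/pwmfd-yolov5", filename="best.pt")

# Load them through the standard YOLOv5 custom-weights entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path=weights)
model.conf = 0.4  # confidence threshold

results = model("masked_crowd.jpg")  # path or URL to a test image
results.print()  # prints the detected classes and bounding boxes
```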
|
pere/multi-sentencefix-mt5-large | 150fbd580463db2664022f879ef6cf3ade1acb3e | 2022-06-08T17:06:33.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"no",
"transformers",
"translation",
"license:cc-by-4.0",
"autotrain_compatible"
] | translation | false | pere | null | pere/multi-sentencefix-mt5-large | 27 | 2 | transformers | 7,462 | ---
language: no
tags:
- translation
widget:
- text: "moscow says deployments in eastern europe increase tensions at the same time nato says russia has moved troops to belarus"
- text: "dette er en liten test som er laget av per egil kummervold han er en forsker som tidligere jobbet ved nasjonalbiblioteket"
- text: "tirsdag var travel for ukrainas president volodymyr zelenskyj på morgenen tok han imot polens statsminister mateusz morawiecki"
- text: "el presidente de estados unidos aprovecha su visita al país fronterizo con ucrania para reunirse con los ministros de defensa y exteriores en un encuentro con refugiados el mandatario calificó al líder ruso como carnicero "
license: cc-by-4.0
---
# DeUnCaser
The output from Automatic Speech Recognition software is usually uncased and without any punctuation. This does not make for very readable text.
The DeUnCaser is a sequence-to-sequence model that reverses this process. It adds punctuation and capitalises the correct words. In some languages this means adding capital letters at the start of sentences and on all proper nouns; in other languages, like German, it means capitalising the first letter of all nouns. It will also attempt to add hyphens and parentheses if this makes the meaning clearer.
It is based on the multilingual T5 model. It is finetuned for 130,000 steps on a TPU v4-16 using T5X, starting from the mT5.1.1 pretrained model. The finetuning is based on up to 1,000,000 training examples (or as many as exist in OSCAR) from each of the 42 languages with a Latin alphabet that are both part of OSCAR and the mT5 training set: Afrikaans, Albanian, Basque, Catalan, Cebuano, Czech, Danish, Dutch, English, Esperanto, Estonian, Finnish, French, Galician, German, Hungarian, Icelandic, Indonesian, Irish, Italian, Kurdish, Latin, Latvian, Lithuanian, Luxembourgish, Malagasy, Malay, Maltese, Norwegian Bokmål, Norwegian Nynorsk, Polish, Portuguese, Romanian, Slovak, Spanish, Swahili, Swedish, Turkish, Uzbek, Vietnamese, Welsh, West Frisian.
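A minimal inference sketch is shown below. It assumes the checkpoint loads with the standard `AutoModelForSeq2SeqLM` classes and that the raw lower-cased text is passed directly as input, as in the widget examples above (no task prefix); both points are assumptions rather than documented usage.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("pere/multi-sentencefix-mt5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("pere/multi-sentencefix-mt5-large")

# One of the widget examples above: lower-cased Norwegian without punctuation.
text = "dette er en liten test som er laget av per egil kummervold han er en forsker som tidligere jobbet ved nasjonalbiblioteket"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```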
A Notebook for creating the training corpus is available [here](https://colab.research.google.com/drive/1bkH94z-0wIQP8Pz0qXFndhoQsokU-78x?usp=sharing). |
bipin/image-caption-generator | fb824de608c028d19bb71c4c43b335cab0f20219 | 2022-03-31T10:39:40.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers",
"image-captioning",
"image-to-text",
"model-index"
] | image-to-text | false | bipin | null | bipin/image-caption-generator | 27 | 2 | transformers | 7,463 | ---
tags:
- image-captioning
- image-to-text
model-index:
- name: image-caption-generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image-caption-generator
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2536
- eval_runtime: 25.369
- eval_samples_per_second: 63.818
- eval_steps_per_second: 8.002
- epoch: 4.0
- step: 3236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
hackathon-pln-es/Detect-Acoso-Twitter-Es | 7a78841b3867be174e23a2bcac9e4cc3c393883c | 2022-03-30T23:56:25.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"es",
"dataset:hackathon-pln-es/Dataset-Acoso-Twitter-Es",
"transformers",
"generated_from_trainer",
"acoso",
"twitter",
"cyberbullying",
"license:apache-2.0",
"model-index"
] | text-classification | false | hackathon-pln-es | null | hackathon-pln-es/Detect-Acoso-Twitter-Es | 27 | 4 | transformers | 7,464 | ---
license: apache-2.0
language: "es"
tags:
- generated_from_trainer
- es
- text-classification
- acoso
- twitter
- cyberbullying
datasets:
- hackathon-pln-es/Dataset-Acoso-Twitter-Es
widget:
- text: "Que horrible como la farándula chilena siempre se encargaba de dejar mal a las mujeres. Un asco"
- text: "Hay que ser bien menestra para amenazar a una mujer con una llave de ruedas. Viendo como se viste no me queda ninguna duda"
- text: "más centrados en tener una sociedad reprimida y sumisa que en estudiar y elaborar políticas de protección hacia las personas de mayor riesgo ante el virus."
metrics:
- accuracy
model-index:
- name: Detección de acoso en Twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Detección de acoso en Twitter Español (Harassment Detection on Spanish-language Twitter)
This model is a fine-tuned version of [mrm8488/distilroberta-finetuned-tweets-hate-speech](https://huggingface.co/mrm8488/distilroberta-finetuned-tweets-hate-speech) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1628
- Accuracy: 0.9167
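A minimal inference sketch with the 🤗 `pipeline` API is shown below, using one of the widget examples above as input; this is an illustrative sketch rather than an official usage guide.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hackathon-pln-es/Detect-Acoso-Twitter-Es",
)

# One of the widget examples above.
tweet = "Hay que ser bien menestra para amenazar a una mujer con una llave de ruedas. Viendo como se viste no me queda ninguna duda"
print(classifier(tweet))  # returns the predicted label and its score
```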
# UNL: Universidad Nacional de Loja
## Team members:
- Anderson Quizhpe <br>
- Luis Negrón <br>
- David Pacheco <br>
- Bryan Requenes <br>
- Paul Pasaca
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6732 | 1.0 | 27 | 0.3797 | 0.875 |
| 0.5537 | 2.0 | 54 | 0.3242 | 0.9167 |
| 0.5218 | 3.0 | 81 | 0.2879 | 0.9167 |
| 0.509 | 4.0 | 108 | 0.2606 | 0.9167 |
| 0.4196 | 5.0 | 135 | 0.1628 | 0.9167 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
McGill-NLP/bart-qg-mlquestions-backtraining | 84305dbb0141149fba691d6804e682c8be1d68ef | 2022-04-08T17:02:56.000Z | [
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.13461",
"arxiv:2104.08801",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | McGill-NLP | null | McGill-NLP/bart-qg-mlquestions-backtraining | 27 | null | transformers | 7,465 | ---
license: cc-by-4.0
---
# BART-base fine-tuned on NaturalQuestions for **Question Generation**
[BART Model](https://arxiv.org/pdf/1910.13461.pdf) trained for Question Generation in an unsupervised manner using [Back-Training](https://arxiv.org/pdf/2104.08801.pdf) algorithm (Kulshreshtha et al, EMNLP 2021). The dataset used are unaligned questions and passages from [MLQuestions dataset](https://github.com/McGill-NLP/MLQuestions/tree/main/data).
## Details of Back-Training
The Back-Training algorithm was presented in [Back-Training excels Self-Training at Unsupervised Domain Adaptation
of Question Generation and Passage Retrieval](https://arxiv.org/pdf/2104.08801.pdf) by *Devang Kulshreshtha, Robert Belfer, Iulian Vlad Serban, Siva Reddy*. Here is the abstract:
In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA) from source to target domain. While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between the target domain and synthetic data distribution, and reduces model overfitting to the source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU4 points on generation, and 17.6% top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation datasetMLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.
## Model training 🏋️
The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/UDA-BackTraining.sh)
## Model in Action 🚀
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
#Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("geekydevu/bart-qg-mlquestions-backtraining")
#Load the model
model = AutoModelForSeq2SeqLM.from_pretrained("geekydevu/bart-qg-mlquestions-backtraining")
```
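A hedged sketch of generating a question from a passage with the `tokenizer` and `model` loaded above is shown below; the plain-passage input format and the generation settings are assumptions based on standard BART usage, not documented behaviour of this checkpoint.
```python
# Generate a question from a passage with the model and tokenizer loaded above.
passage = (
    "Gradient descent is an iterative optimization algorithm that updates model "
    "parameters in the direction of the negative gradient of the loss function."
)
inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, num_beams=4, max_length=48, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```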
## Citation
If you want to cite this model you can use this:
```bibtex
@inproceedings{kulshreshtha-etal-2021-back,
title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
author = "Kulshreshtha, Devang and
Belfer, Robert and
Serban, Iulian Vlad and
Reddy, Siva",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.566",
pages = "7064--7078",
abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
}
```
> Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
BFMeriem/chatbot-model | 85172e7e3adef5a2d85cfa2ec90c0a8e575c3f24 | 2022-04-18T05:16:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BFMeriem | null | BFMeriem/chatbot-model | 27 | 1 | transformers | 7,466 | ---
tags:
- conversational
---
# Michael Scott Character Chatbot |
smeoni/nbme-deberta-V3-large | 0887786226e8e2afb85b4b220e906583040344e1 | 2022-04-19T14:22:48.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | smeoni | null | smeoni/nbme-deberta-V3-large | 27 | null | transformers | 7,467 | Entry not found |
ELiRF/mt5-base-dacsa-ca | 378699aaf978689d242ba0140b16c953beda61ee | 2022-07-11T17:33:29.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"ca",
"arxiv:2010.11934",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | ELiRF | null | ELiRF/mt5-base-dacsa-ca | 27 | null | transformers | 7,468 | ---
language: ca
tags:
- summarization
widget:
- text: "La Universitat Politècnica de València (UPV), a través del projecte Atenea “plataforma de dones, art i tecnologia” i en col·laboració amb les companyies tecnològiques Metric Salad i Zetalab, ha digitalitzat i modelat en 3D per a la 35a edició del Festival Dansa València, que se celebra del 2 al 10 d'abril, la primera peça de dansa en un metaverso específic. La peça No és amor, dirigida per Lara Misó, forma part de la programació d'aquesta edició del Festival Dansa València i explora la figura geomètrica del cercle des de totes les seues perspectives: espacial, corporal i compositiva. No és amor està inspirada en el treball de l'artista japonesa Yayoi Kusama i mira de prop les diferents facetes d'una obsessió. Així dona cabuda a la insistència, la repetició, el trastorn, la hipnosi i l'alliberament. El procés de digitalització, materialitzat per Metric Salad i ZetaLab, ha sigut complex respecte a uns altres ja realitzats a causa de l'enorme desafiament que comporta el modelatge en 3D de cossos en moviment al ritme de la composició de l'obra. L'objectiu era generar una experiència el més realista possible i fidedigna de l'original perquè el resultat final fora un procés absolutament immersiu.Així, el metaverso està compost per figures modelades en 3D al costat de quatre projeccions digitalitzades en pantalles flotants amb les quals l'usuari podrà interactuar segons es vaja acostant, bé mitjançant els comandaments de l'ordinador, bé a través d'ulleres de realitat virtual. L'objectiu és que quan l'usuari s'acoste a cadascuna de les projeccions tinga la sensació d'una immersió quasi completa en fondre's amb el contingut audiovisual que li genere una experiència intimista i molt real."
---
# mT5 (base model), fine-tuned on the *Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA)* dataset for Catalan
The mT5 model was presented in [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. The base version of the mT5 model is pre-trained in 101 languages, including English, Spanish, Italian, Catalan and other ones.
# Model description
The mT5-base model has been fine-tuned for abstractive text summarization for Catalan.
# Training data
The mT5-base model has been fine-tuned on *the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA)* dataset, specifically with the Catalan articles. The Catalan subset contains 636.596 document-summary pairs of Catalan news articles.
The DACSA dataset can be requested at the following address: https://xarrador.dsic.upv.es/resources/dacsa
# Intended uses & limitations
The model can be used for text summarization, especially in news articles.
# How to use
You can use the summarization model with the [pipeline API](https://huggingface.co/transformers/main_classes/pipelines.html):
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="ELiRF/mt5-base-dacsa-ca")
ARTICLE = """La Universitat Politècnica de València (UPV), a través del
projecte Atenea “plataforma de dones, art i tecnologia” i en col·laboració amb
les companyies tecnològiques Metric Salad i Zetalab, ha digitalitzat i modelat
en 3D per a la 35a edició del Festival Dansa València, que se celebra del 2 al
10 d'abril, la primera peça de dansa en un metaverso específic. La peça No és
amor, dirigida per Lara Misó, forma part de la programació d'aquesta edició del
Festival Dansa València i explora la figura geomètrica del cercle des de totes
les seues perspectives: espacial, corporal i compositiva. No és amor està
inspirada en el treball de l'artista japonesa Yayoi Kusama i mira de prop les
diferents facetes d'una obsessió. Així dona cabuda a la insistència, la
repetició, el trastorn, la hipnosi i l'alliberament. El procés de
digitalització, materialitzat per Metric Salad i ZetaLab, ha sigut complex
respecte a uns altres ja realitzats a causa de l'enorme desafiament que
comporta el modelatge en 3D de cossos en moviment al ritme de la composició de
l'obra. L'objectiu era generar una experiència el més realista possible i
fidedigna de l'original perquè el resultat final fora un procés absolutament
immersiu.Així, el metaverso està compost per figures modelades en 3D al costat
de quatre projeccions digitalitzades en pantalles flotants amb les quals
l'usuari podrà interactuar segons es vaja acostant, bé mitjançant els
comandaments de l'ordinador, bé a través d'ulleres de realitat virtual.
L'objectiu és que quan l'usuari s'acoste a cadascuna de les projeccions tinga
la sensació d'una immersió quasi completa en fondre's amb el contingut
audiovisual que li genere una experiència intimista i molt real.
"""
print(summarizer(ARTICLE, truncation=True))
>>>[{'summary_text': "La Universitat Politècnica de València ha digitalitzat i modelat en 3D la primera peça de dansa en un metaverso específic."}]
```
### BibTeX entry
```bibtex
@inproceedings{segarra-soriano-etal-2022-dacsa,
title = "{DACSA}: A large-scale Dataset for Automatic summarization of {C}atalan and {S}panish newspaper Articles",
author = "Segarra Soriano, Encarnaci{\'o}n and
Ahuir, Vicent and
Hurtado, Llu{\'\i}s-F. and
Gonz{\'a}lez, Jos{\'e}",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.434",
pages = "5931--5943",
abstract = "The application of supervised methods to automatic summarization requires the availability of adequate corpora consisting of a set of document-summary pairs. As in most Natural Language Processing tasks, the great majority of available datasets for summarization are in English, making it difficult to develop automatic summarization models for other languages. Although Spanish is gradually forming part of some recent summarization corpora, it is not the same for minority languages such as Catalan.In this work, we describe the construction of a corpus of Catalan and Spanish newspapers, the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA) corpus. It is a high-quality large-scale corpus that can be used to train summarization models for Catalan and Spanish.We have carried out an analysis of the corpus, both in terms of the style of the summaries and the difficulty of the summarization task. In particular, we have used a set of well-known metrics in the summarization field in order to characterize the corpus. Additionally, for benchmarking purposes, we have evaluated the performances of some extractive and abstractive summarization systems on the DACSA corpus.",
}
``` |
Hate-speech-CNERG/bengali-abusive-MuRIL | afb4d3694dbaed80156e4e947cef6572d3759e4d | 2022-05-03T08:50:49.000Z | [
"pytorch",
"bert",
"text-classification",
"bn",
"arxiv:2204.12543",
"transformers",
"license:afl-3.0"
] | text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/bengali-abusive-MuRIL | 27 | null | transformers | 7,469 | ---
language: [bn]
license: afl-3.0
---
This model is used for detecting **abusive speech** in **Bengali**. It is a fine-tuned version of the MuRIL model, trained on a Bengali abusive speech dataset.
The model is trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive).
LABEL_0 :-> Normal
LABEL_1 :-> Abusive
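A minimal inference sketch with the `pipeline` API is shown below; the example sentence and the output shown in the comment are illustrative only.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/bengali-abusive-MuRIL",
)

print(classifier("আপনার দিনটি শুভ হোক"))  # e.g. [{'label': 'LABEL_0', 'score': ...}]
```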
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ |
lilitket/aspram | b1646875d257de1e8325e01dbd0a5e5cff11c4fb | 2022-05-03T17:41:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/aspram | 27 | null | transformers | 7,470 | Entry not found |
allenai/mtk-instruct-3b-def-pos | a61092a4518022ceebc66aba0d86a68622764035 | 2022-05-27T06:29:55.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"multilingual",
"dataset:natural instructions v2.0",
"arxiv:1910.10683",
"arxiv:2204.07705",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/mtk-instruct-3b-def-pos | 27 | null | transformers | 7,471 | ---
language: multilingual
license: apache-2.0
datasets:
- natural instructions v2.0
---
# Model description
Tk-Instruct is a series of encoder-decoder Transformer models that are trained to solve various NLP tasks by following in-context instructions (plain language task definitions, k-shot examples, explanations, etc). Built upon the pre-trained [T5 models](https://arxiv.org/abs/1910.10683), they are fine-tuned on a large number of tasks & instructions that are collected in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. This enables the model to not only process the training tasks, but also generalize to many unseen tasks without further parameter updates.
More resources for using the model:
- **Paper**: [link](https://arxiv.org/abs/2204.07705)
- **Code repository**: [Tk-Instruct](https://github.com/yizhongw/Tk-Instruct)
- **Official Website**: [Natural Instructions](https://instructions.apps.allenai.org/)
- **All released models**: [allenai/tk-instruct](https://huggingface.co/models?search=allenai/tk-instruct)
## Intended uses & limitations
Tk-Instruct can be used to do many NLP tasks by following instructions.
### How to use
When instructing the model, task definition or demonstration examples or explanations should be prepended to the original input and fed into the model. You can easily try Tk-Instruct models as follows:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-3b-def")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-3b-def")
>>> input_ids = tokenizer.encode(
"Definition: return the currency of the given country. Now complete the following example - Input: India. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'Indian Rupee'
>>> input_ids = tokenizer.encode(
"Definition: negate the following sentence. Input: John went to school. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'John did not go to school.'
```
### Limitations
We are still working on understanding the behaviors of these models, but here are several issues we have found:
- Models are generally sensitive to the instruction. Sometimes rewording the instruction can lead to very different output.
- Models are not always compliant with the instruction. Sometimes the model doesn't follow your instruction (e.g., when you ask the model to generate one sentence, it might still generate one word or a long story).
- Models might totally fail on some tasks.
If you find serious issues or any interesting result, you are welcome to share with us!
## Training data
Tk-Instruct is trained using the tasks & instructions in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. We follow the official train/test split. The Tk-Instruct model series was trained using 757 tasks, and the mTk-Instruct series was trained using 1271 tasks (including some non-English tasks).
The training tasks are in 64 broad categories, such as text categorization / question answering / sentiment analysis / summarization / grammar error detection / dialogue generation / etc. The other 12 categories are selected for evaluation.
## Training procedure
All our models are initialized from either T5 models or mT5 models. Because generating the output can be regarded as a form of language modeling, we used their [LM adapted version](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k). All data is converted into a text-to-text format, and models are fine-tuned to maximize the likelihood of the output sequence.
Our [released models](https://huggingface.co/models?search=allenai/tk-instruct) are in different sizes, and each of them was trained with a specific type of instruction encoding. For instance, `tk-instruct-3b-def-pos` was initialized from [t5-xl-lm-adapt](https://huggingface.co/google/t5-xl-lm-adapt), and it saw task definition & 2 positive examples as the instruction during training time.
Although they are trained with only one type of instruction encodings, we found they can usually work with other type of encodings at test time (see more in our paper).
### BibTeX entry and citation info
```bibtex
@article{wang2022benchmarking,
title={Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and A. Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and M. Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddharth Deepak Mishra and Sujan C. Reddy and Sumanta Patro and Tanay Dixit and Xu-dong Shen and Chitta Baral and Yejin Choi and Hannaneh Hajishirzi and Noah A. Smith and Daniel Khashabi},
year={2022},
archivePrefix={arXiv},
eprint={2204.07705},
primaryClass={cs.CL},
}
``` |
aiola/roberta-base-corener | a59295582117c3706c06aa707799dcd26fbab4ab | 2022-07-03T14:15:40.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:Ontonotes",
"dataset:CoNLL04",
"transformers",
"NER",
"named entity recognition",
"RE",
"relation extraction",
"entity mention detection",
"EMD",
"coreference resolution",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | aiola | null | aiola/roberta-base-corener | 27 | null | transformers | 7,472 | ---
language:
- en
tags:
- NER
- named entity recognition
- RE
- relation extraction
- entity mention detection
- EMD
- coreference resolution
license: apache-2.0
datasets:
- Ontonotes
- CoNLL04
---
# CoReNer
## Demo
We released an online demo so you can easily play with the model. Check it out: [http://corener-demo.aiola-lab.com](http://corener-demo.aiola-lab.com).
The demo uses the [aiola/roberta-base-corener](https://huggingface.co/aiola/roberta-base-corener) model.
## Model description
A multi-task model for named-entity recognition, relation extraction, entity mention detection, and coreference resolution.
We model NER as a span classification task and relation extraction as a multi-label classification of (NER) span tuples.
Similarly, we model EMD as a span classification task and CR as a binary classification of (EMD) span tuples.
To construct the CR clusters, we keep the top antecedent of each mention, then compute the connected components of the mentions' undirected graph.
The model was trained to recognize:
- Entity types: GPE, ORG, PERSON, DATE, NORP, CARDINAL, MONEY, PERCENT, WORK_OF_ART, ORDINAL, EVENT, LOC, TIME, FAC, QUANTITY, LAW, PRODUCT, LANGUAGE.
- Relation types: Kill, Live_In, Located_In, OrgBased_In, Work_For.
## Usage example
See additional details and usage examples at: https://github.com/aiola-lab/corener.
```python
import json
from transformers import AutoTokenizer
from corener.models import Corener, ModelOutput
from corener.data import MTLDataset
from corener.utils.prediction import convert_model_output
tokenizer = AutoTokenizer.from_pretrained("aiola/roberta-base-corener")
model = Corener.from_pretrained("aiola/roberta-base-corener")
model.eval()
examples = [
"Apple Park is the corporate headquarters of Apple Inc., located in Cupertino, California, United States. It was opened to employees in April 2017, while construction was still underway, and superseded the original headquarters at 1 Infinite Loop, which opened in 1993."
]
dataset = MTLDataset(
types=model.config.types,
tokenizer=tokenizer,
train_mode=False,
)
dataset.read_dataset(examples)
example = dataset.get_example(0) # get first example
output: ModelOutput = model(
input_ids=example.encodings,
context_masks=example.context_masks,
entity_masks=example.entity_masks,
entity_sizes=example.entity_sizes,
entity_spans=example.entity_spans,
entity_sample_masks=example.entity_sample_masks,
inference=True,
)
print(json.dumps(convert_model_output(output=output, batch=example, dataset=dataset), indent=2))
```
|
charsiu/g2p_multilingual_mT5_small | 9aa1a3006e408f2420cb6a0b8ceac7768095ead9 | 2022-05-19T05:01:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | charsiu | null | charsiu/g2p_multilingual_mT5_small | 27 | null | transformers | 7,473 | Entry not found |
Matthijs/deeplabv3-mobilevit-small | 3489480174ccb992f903e63e380037c61d9da27e | 2022-05-24T11:35:51.000Z | [
"pytorch",
"coreml",
"mobilevit",
"dataset:pascal-voc",
"arxiv:2110.02178",
"arxiv:1706.05587",
"transformers",
"vision",
"image-segmentation",
"license:other"
] | image-segmentation | false | Matthijs | null | Matthijs/deeplabv3-mobilevit-small | 27 | 1 | transformers | 7,474 | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- pascal-voc
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
---
# MobileViT + DeepLabV3 (small-sized model)
MobileViT model pre-trained on PASCAL VOC at resolution 512x512. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, however, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViT backbone for semantic segmentation.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained('Matthijs/deeplabv3-mobilevit-small')
model = MobileViTForSemanticSegmentation.from_pretrained('Matthijs/deeplabv3-mobilevit-small')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_mask = logits.argmax(1).squeeze(0)
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The MobileViT + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset.
## Training procedure
### Preprocessing
At inference time, images are center-cropped at 512x512. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
### Pretraining
The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
To obtain the DeepLabV3 model, MobileViT was fine-tuned on the PASCAL VOC dataset using 4 NVIDIA A100 GPUs.
## Evaluation results
| Model | PASCAL VOC mIOU | # params | URL |
|------------------|-----------------|-----------|--------------------------------------------------------------|
| MobileViT-XXS | 73.6 | 1.9 M | https://huggingface.co/Matthijs/deeplabv3-mobilevit-xx-small |
| MobileViT-XS | 77.1 | 2.9 M | https://huggingface.co/Matthijs/deeplabv3-mobilevit-x-small |
| **MobileViT-S** | **79.1** | **6.4 M** | https://huggingface.co/Matthijs/deeplabv3-mobilevit-small |
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2110.02178}
}
```
|
JeffreyLau/SikuGPT2 | 4220814c81e49ef1123795b8855a39c613579380 | 2022-07-10T01:30:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"zh",
"transformers"
] | text-generation | false | JeffreyLau | null | JeffreyLau/SikuGPT2 | 27 | 1 | transformers | 7,475 | ---
language: zh
widget:
- text: "當 是 時 "
- text: "子 曰 "
---
# SikuGPT2 Model
## Model description
The model is used to generate ancient Chinese text. You can download the model via Hugging Face from the link [SikuGPT2](https://huggingface.co/JeffreyLau/SikuGPT2).
Since the parameter skip_special_tokens is used in pipelines.py, special tokens such as [SEP] and [UNK] will be deleted, so the output of the Hosted inference API (right) may not be displayed properly.
## How to use
You can use the model directly with a pipeline for text generation:
When the parameter skip_special_tokens is True:
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel,TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("JeffreyLau/SikuGPT2")
>>> model = GPT2LMHeadModel.from_pretrained("JeffreyLau/SikuGPT2")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("當 是 時 ", max_length=100, do_sample=True)
[{'generated_text': '當 是 時 王 世 充 已 在 西 夏 恐 兵 出 相 擊 則 遣 信 報 之 且 曰 必 以 五 百 騎 徑 渡 江 由 是 中 國 稍 安 今 賊 既 渡 江 必 無 東 救 上 曰 信 可 謂 不 亡 矣 世 充 將 何 從 與 之 書 使 者 來 上 既 見 信 書 即 遣 二 將 邀 之 使 者 皆 已 去 上 問 之 信 曰 汝 之 去 將 何 以 為 效 對 曰 吾 聞 上 使 者 至 即 令 其 人 還 信 答 書 曰 臣 受 漢 恩 厚 無 以 報 塞 然 所 以 不 從 者 誠 以 天 地 之 德 尚 寛 不 殺 之 恩 豈 待 吾 命 而 自 殺 耶 昔 劉 累 為 漢 將 不 受 命 乃 自 為 主 爾 今 既 為 漢 將 不 受 命 乃 自 殺 以 自 安 耳 上 曰 善 而 以 問 張 子 房 趙 李 牧 張 子 房 皆 言 可 與 為 盟 主 也 其 後 漢 亡 張 魯 反 於 西 河 王 霸 為 漢 公 主 求 和 乃 上 書 求 和 於 上 曰 臣 聞 古 之 受 命 者 惟 太 公 得 之 故 曰 上 天 降 威 以 作 民 主 夫 豈 能 以 一 人 之 身 而 制 天 下 之 大 敵 哉 太 公 得 之 故 曰 大 公 者 何 也 曰 夫 受 命 者 必 有 天 下 為 天 下 所 尊 服 不 必 皆 得 其 人 也 古 者 天 子 之 命 臣 為 天 子 者 皆 為 君 之 子 今 天 下 皆 為 臣 之 子 茍 不 得 其 道 則 一 人 之 身 百 姓 何 所 賴 之 可 得 然 則 命 之 不 可 謂 之 命 矣 上 曰 古 之 受 命 者 奈 何 對 曰 上 古 之 帝 也 命 已 絶 而 天 下 不 復 定 天 必 祚 之 故 命 之 不 可 謂 之 有 天 下 也 天 下 各 保 其 社 稷 其 餘 衆 官 無 有 分'}]
```
When the parameter skip_special_tokens is False:
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel,TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("JeffreyLau/SikuGPT2")
>>> model = GPT2LMHeadModel.from_pretrained("JeffreyLau/SikuGPT2")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("當 是 時 ", max_length=100, do_sample=True)
[{'generated_text': '當 是 時 王 世 充 已 在 西 夏 恐 兵 出 相 擊 則 遣 信 報 之 且 曰 必 以 五 百 騎 徑 渡 江 由 是 中 國 稍 安 今 賊 既 渡 江 必 無 東 救 上 曰 信 可 謂 不 亡 矣 世 充 將 何 從 與 之 書 使 者 來 上 既 見 信 書 即 遣 二 將 邀 之 使 者 皆 已 去 上 問 之 信 曰 汝 之 去 將 何 以 為 效 對 曰 吾 聞 上 使 者 至 即 令 其 人 還 信 答 書 曰 臣 受 漢 恩 厚 無 以 報 塞 然 所 以 不 從 者 誠 以 天 地 之 德 尚 寛 不 殺 之 恩 豈 待 吾 命 而 自 殺 耶 昔 劉 累 為 漢 將 不 受 命 乃 自 為 主 爾 今 既 為 漢 將 不 受 命 乃 自 殺 以 自 安 耳 上 曰 善 而 以 問 張 子 房 趙 李 牧 張 子 房 皆 言 可 與 為 盟 主 也 其 後 漢 亡 張 魯 反 於 西 河 王 霸 為 漢 公 主 求 和 乃 上 書 求 和 於 上 曰 臣 聞 古 之 受 命 者 惟 太 公 得 之 故 曰 上 天 降 威 以 作 民 主 夫 豈 能 以 一 人 之 身 而 制 天 下 之 大 敵 哉 太 公 得 之 故 曰 大 公 者 何 也 曰 夫 受 命 者 必 有 天 下 為 天 下 所 尊 服 不 必 皆 得 其 人 也 古 者 天 子 之 命 臣 為 天 子 者 皆 為 君 之 子 今 天 下 皆 為 臣 之 子 茍 不 得 其 道 則 一 人 之 身 百 姓 何 所 賴 之 可 得 然 則 命 之 不 可 謂 之 命 矣 上 曰 古 之 受 命 者 奈 何 對 曰 上 古 之 帝 也 命 已 絶 而 天 下 不 復 定 天 必 祚 之 故 命 之 不 可 謂 之 有 天 下 也 天 下 各 保 其 社 稷 其 餘 衆 官 無 有 分'}]
```
## Training data
The “Siku Quanshu” full-text corpus, the same corpus used in the [SikuBERT](https://huggingface.co/SIKU-BERT/sikubert) project, was used as the training data for SikuGPT2.
## Training procedure
The model is pre-trained with [run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py). We pre-train the model with a sequence length of 512 and use an extended vocabulary to handle out-of-vocabulary words.
## Citation
The paper has not been published yet. Please cite this URL instead. |
KamilAin/bart-base-booksum | 789ae1ed3e7e8da4ae759a7ab062f9afe907f04d | 2022-05-24T08:19:25.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:kmfoda/booksum",
"transformers",
"booksum",
"summary",
"summarization",
"book",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | KamilAin | null | KamilAin/bart-base-booksum | 27 | null | transformers | 7,476 | ---
language: en
license: apache-2.0
tags:
- booksum
- summary
- summarization
- book
metrics:
- rouge
widget:
- text: "In the dead night, Frodo lay in a dream without light. Then he saw the young moon rising; under its thin light there loomed before him a black wall of rock, pierced by a dark arch like a great gate. It seemed to Frodo that he was lifted up, and passing over he saw that the rock-wall was a circle of hills, and that within it was a plain, and in the midst of the plain stood a pinnacle of stone, like a vast tower but not made by hands. On its top stood the figure of a man. The moon as it rose seemed to hang for a moment above his head and glistened in his white hair as the wind stirred it. Up from the dark plain below came the crying of fell voices, and the howling of many wolves. Suddenly a shadow, like the shape of great wings, passed across the moon. The figure lifted his arms and a light flashed from the staff that he wielded. A mighty eagle swept down and bore him away. The voices wailed and the wolves yammered. There was a noise like a strong wind blowing, and on it was borne the sound of hoofs, galloping, galloping, galloping from the East. ‘Black Riders!’ thought Frodo as he wakened, with the sound of the hoofs still echoing in his mind. He wondered if he would ever again have the courage to leave the safety of these stone walls. He lay motionless, still listening; but all was now silent, and at last he turned and fell asleep again or wandered into some other unremembered dream."
example_title: "book example"
datasets:
- kmfoda/booksum
---
# BART-base-Booksum
This is a BART-base model fine-tuned on the BookSum dataset
- **Use cases:** book summarization, general text summarization.
- This is a fine-tuned [`facebook/bart-base`](https://huggingface.co/facebook/bart-base); it has been fine-tuned for five epochs (see the usage sketch below)
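A minimal usage sketch with the `transformers` summarization pipeline (the generation parameters below are illustrative assumptions, not settings reported by the author):

```python
# Minimal sketch: summarize a book excerpt with the fine-tuned BART-base model.
# max_length / min_length / num_beams are assumed values, not tuned settings.
from transformers import pipeline

summarizer = pipeline("summarization", model="KamilAin/bart-base-booksum")

excerpt = "In the dead night, Frodo lay in a dream without light. ..."  # any long passage
summary = summarizer(excerpt, max_length=128, min_length=30, num_beams=4)
print(summary[0]["summary_text"])
```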
# Results
No results reported for this model yet.
|
M47Labs/spanish_news_classification_headlines_untrained | 1105ed2f79d7bead889b8812dd0e9fd991c4fb38 | 2022-05-30T10:44:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | M47Labs | null | M47Labs/spanish_news_classification_headlines_untrained | 27 | null | transformers | 7,477 | ---
widget:
- text: "El dólar se dispara tras la reunión de la Fed"
---
# Spanish News Classification Headlines
SNCH: this model was developed by [M47Labs](https://www.m47labs.com/es/) for text classification. The base model used was [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased); however, this model has not been fine-tuned on any dataset. The objective is to show the performance of the model when it is used for inference without any training at all.
## Dataset validation Sample
Dataset size: 1000
Columns: idTask, task content 1, idTag, tag.
|task content|tag|
|------|------|
|Alcalá de Guadaíra celebra la IV Semana de la Diversidad Sexual con acciones de sensibilización|sociedad|
|El Archipiélago Chinijo Graciplus se impone en el Trofeo Centro Comercial Rubicón|deportes|
|Un total de 39 personas padecen ELA actualmente en la provincia|sociedad|
|Eurocopa 2021 : Italia vence a Gales y pasa a octavos con su candidatura reforzada|deportes|
|Resolución de 10 de junio de 2021, del Ayuntamiento de Tarazona de La Mancha (Albacete), referente a la convocatoria para proveer una plaza.|sociedad|
|El primer ministro sueco pierde una moción de censura|politica|
|El dólar se dispara tras la reunión de la Fed|economia|
## Labels:
* ciencia_tecnologia
* clickbait
* cultura
* deportes
* economia
* educacion
* medio_ambiente
* opinion
* politica
* sociedad
## Example of Use
### Pipeline
```{python}
import torch
from transformers import AutoTokenizer, BertForSequenceClassification,TextClassificationPipeline
review_text = 'los vehiculos que esten esperando pasajaeros deberan estar apagados para reducir emisiones'
path = "M47Labs/spanish_news_classification_headlines_untrained"
tokenizer = AutoTokenizer.from_pretrained(path)
model = BertForSequenceClassification.from_pretrained(path)
nlp = TextClassificationPipeline(task = "text-classification",
model = model,
tokenizer = tokenizer)
print(nlp(review_text))
```
```[{'label': 'medio_ambiente', 'score': 0.2834321384291023}]```
### Pytorch
```{python}
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = 'M47Labs/spanish_news_classification_headlines_untrained'
MAX_LEN = 32
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
texto = "las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno"
encoded_review = tokenizer.encode_plus(
texto,
max_length=MAX_LEN,
add_special_tokens=True,
#return_token_type_ids=False,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors='pt',
)
input_ids = encoded_review['input_ids']
attention_mask = encoded_review['attention_mask']
output = model(input_ids, attention_mask)
_, prediction = torch.max(output['logits'], dim=1)
print(f'Review text: {texto}')
print(f'Sentiment : {model.config.id2label[prediction.detach().cpu().numpy()[0]]}')
```
```Review text: las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno```
```Sentiment : opinion```
A more in-depth example of how to use the model can be found in this Colab notebook: https://colab.research.google.com/drive/1XsKea6oMyEckye2FePW_XN7Rf8v41Cw_?usp=sharing
## Validation Results
| Metric (Full Dataset) | Value |
|------|------|
|Accuracy Score|0.362|
|Precision (Macro)|0.21|
|Recall (Macro)|0.22|

|
projecte-aina/roberta-base-ca-v2 | 97e9d0f724fa61644f7f6972e4c19345c0dc4bb2 | 2022-07-25T06:55:23.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ca",
"transformers",
"catalan",
"masked-lm",
"RoBERTa-base-ca-v2",
"CaText",
"Catalan Textual Corpus",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | projecte-aina | null | projecte-aina/roberta-base-ca-v2 | 27 | null | transformers | 7,478 | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "masked-lm"
- "RoBERTa-base-ca-v2"
- "CaText"
- "Catalan Textual Corpus"
widget:
- text: "El Català és una llengua molt <mask>."
- text: "Salvador Dalí va viure a <mask>."
- text: "La Costa Brava té les millors <mask> d'Espanya."
- text: "El cacaolat és un batut de <mask>."
- text: "<mask> és la capital de la Garrotxa."
- text: "Vaig al <mask> a buscar bolets."
- text: "Antoni Gaudí vas ser un <mask> molt important per la ciutat."
- text: "Catalunya és una referència en <mask> a nivell europeu."
---
# Catalan BERTa-v2 (roberta-base-ca-v2) base model
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
- [CLUB Benchmark](#club-benchmark)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)
</details>
## Model description
The **roberta-base-ca-v2** is a transformer-based masked language model for the Catalan language.
It is based on the [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) base model
and has been trained on a medium-size corpus collected from publicly available corpora and crawlers.
## Intended Uses and Limitations
The **roberta-base-ca-v2** model is ready to use only for masked language modeling, i.e. performing the Fill Mask task (try the inference API or read the next section).
However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition.
## How to Use
Here is how to use this model:
```python
from transformers import AutoModelForMaskedLM
from transformers import AutoTokenizer, FillMaskPipeline
from pprint import pprint
tokenizer_hf = AutoTokenizer.from_pretrained('projecte-aina/roberta-base-ca-v2')
model = AutoModelForMaskedLM.from_pretrained('projecte-aina/roberta-base-ca-v2')
model.eval()
pipeline = FillMaskPipeline(model, tokenizer_hf)
text = f"Em dic <mask>."
res_hf = pipeline(text)
pprint([r['token_str'] for r in res_hf])
```
## Training
### Training data
The training corpus consists of several corpora gathered from web crawling and public corpora.
| Corpus | Size in GB |
|-------------------------|------------|
| Catalan Crawling | 13.00 |
| Wikipedia | 1.10 |
| DOGC | 0.78 |
| Catalan Open Subtitles | 0.02 |
| Catalan Oscar | 4.00 |
| CaWaC | 3.60 |
| Cat. General Crawling | 2.50 |
| Cat. Government Crawling | 0.24 |
| ACN | 0.42 |
| Padicat | 0.63 |
| RacoCatalá | 8.10 |
| Nació Digital | 0.42 |
| Vilaweb | 0.06 |
| Tweets | 0.02 |
### Training Procedure
The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2)
used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens.
The RoBERTa-ca-v2 pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model
with the same hyperparameters as in the original work.
The training lasted a total of 96 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM.
## Evaluation
### CLUB Benchmark
The BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB),
that has been created along with the model.
It contains the following tasks and their related datasets:
1. Named Entity Recognition (NER)
**[AnCora Catalan 2.0.0](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: extracted named entities from the original [Ancora](https://doi.org/10.5281/zenodo.4762030) version,
filtering out some unconventional ones, like book titles, and transcribed them into a standard CONLL-IOB format
2. Part-of-Speech Tagging (POS)
Catalan-Ancora: from the [Universal Dependencies treebank](https://github.com/UniversalDependencies/UD_Catalan-AnCora) of the well-known Ancora corpus.
3. Text Classification (TC)
**[TeCla](https://huggingface.co/datasets/projecte-aina/tecla)**: consisting of 137k news pieces from the Catalan News Agency ([ACN](https://www.acn.cat/)) corpus, with 30 labels.
4. Textual Entailment (TE)
**[TECa](https://huggingface.co/datasets/projecte-aina/teca)**: consisting of 21,163 pairs of premises and hypotheses, annotated according to the inference relation they have (implication, contradiction, or neutral), extracted from the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus).
5. Semantic Textual Similarity (STS)
**[Catalan semantic textual similarity](https://huggingface.co/datasets/projecte-aina/sts-ca)**: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them, scraped from the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus).
6. Question Answering (QA):
**[VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad)**: contains 6,282 pairs of questions and answers, outsourced from 2095 Catalan language articles from VilaWeb newswire text.
**[ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad)**: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan.
**[CatalanQA](https://huggingface.co/datasets/projecte-aina/catalanqa)**: an aggregation of 2 previous datasets (VilaQuAD and ViquiQuAD), 21,427 pairs of Q/A balanced by type of question, containing one question and one answer per context, although the contexts can repeat multiple times.
**[XQuAD](https://huggingface.co/datasets/projecte-aina/xquad-ca)**: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a _test set_.
Here are the train/dev/test splits of the datasets:
| Task (Dataset) | Total | Train | Dev | Test |
|:--|:--|:--|:--|:--|
| NER (Ancora) |13,581 | 10,628 | 1,427 | 1,526 |
| POS (Ancora)| 16,678 | 13,123 | 1,709 | 1,846 |
| STS | 3,073 | 2,073 | 500 | 500 |
| TC (TeCla) | 137,775 | 110,203 | 13,786 | 13,786|
| TE (TECa) | 21,163 | 16,930 | 2,116 | 2,117
| QA (VilaQuAD) | 6,282 | 3,882 | 1,200 | 1,200 |
| QA (ViquiQuAD) | 14,239 | 11,255 | 1,492 | 1,429 |
| QA (CatalanQA) | 21,427 | 17,135 | 2,157 | 2,135 |
### Evaluation Results
| Task | NER (F1) | POS (F1) | STS (Comb) | TC (Acc.) | TE (Acc.) | QA (VilaQuAD) (F1/EM)| QA (ViquiQuAD) (F1/EM) | QA (CatalanQA) (F1/EM) | QA (XQuAD-Ca)<sup>1</sup> (F1/EM) |
| ------------|:-------------:| -----:|:------|:------|:-------|:------|:----|:----|:----|
| RoBERTa-base-ca-v2 | **89.45** | 99.09 | 79.07 | **74.26** | **83.14** | **87.74/72.58** | **88.72/75.91** | **89.50**/76.63 | **73.64/55.42** |
| BERTa | 88.94 | **99.10** | **80.19** | 73.65 | 79.26 | 85.93/70.58 | 87.12/73.11 | 89.17/**77.14** | 69.20/51.47 |
| mBERT | 87.36 | 98.98 | 74.26 | 69.90 | 74.63 | 82.78/67.33 | 86.89/73.53 | 86.90/74.19 | 68.79/50.80 |
| XLM-RoBERTa | 88.07 | 99.03 | 61.61 | 70.14 | 33.30 | 86.29/71.83 | 86.88/73.11 | 88.17/75.93 | 72.55/54.16 |
<sup>1</sup> : Trained on CatalanQA, tested on XQuAD-Ca.
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Contributions
[N/A]
|
titi7242229/roberta-base-bne-finetuned_personality_multi_2 | 68789c67db2f1d79227c752c3ec00ee570675d7d | 2022-06-11T06:21:27.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | titi7242229 | null | titi7242229/roberta-base-bne-finetuned_personality_multi_2 | 27 | null | transformers | 7,479 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned_personality_multi_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned_personality_multi_2
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2983
- Accuracy: 0.5429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3256 | 1.0 | 125 | 2.2642 | 0.2161 |
| 1.815 | 2.0 | 250 | 1.9569 | 0.3919 |
| 1.614 | 3.0 | 375 | 1.7264 | 0.5014 |
| 1.1718 | 4.0 | 500 | 1.6387 | 0.5239 |
| 1.135 | 5.0 | 625 | 1.6259 | 0.5245 |
| 0.5637 | 6.0 | 750 | 1.6443 | 0.5372 |
| 0.3672 | 7.0 | 875 | 1.7146 | 0.5326 |
| 0.3249 | 8.0 | 1000 | 1.8099 | 0.5297 |
| 0.1791 | 9.0 | 1125 | 1.8888 | 0.5285 |
| 0.2175 | 10.0 | 1250 | 1.9228 | 0.5326 |
| 0.0465 | 11.0 | 1375 | 1.9753 | 0.5435 |
| 0.1154 | 12.0 | 1500 | 2.1102 | 0.5256 |
| 0.0745 | 13.0 | 1625 | 2.1319 | 0.5429 |
| 0.0281 | 14.0 | 1750 | 2.1743 | 0.5360 |
| 0.0173 | 15.0 | 1875 | 2.2087 | 0.5441 |
| 0.0269 | 16.0 | 2000 | 2.2456 | 0.5424 |
| 0.0107 | 17.0 | 2125 | 2.2685 | 0.5458 |
| 0.0268 | 18.0 | 2250 | 2.2893 | 0.5383 |
| 0.0245 | 19.0 | 2375 | 2.2943 | 0.5418 |
| 0.0156 | 20.0 | 2500 | 2.2983 | 0.5429 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ahmeddbahaa/mT5_multilingual_XLSum-finetune-ar-xlsum | 69531cb8276ee80c3d24f3d2a3025241d9ecb83f | 2022-06-13T19:20:20.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"mT5_multilingual_XLSum",
"abstractive summarization",
"ar",
"xlsum",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | ahmeddbahaa | null | ahmeddbahaa/mT5_multilingual_XLSum-finetune-ar-xlsum | 27 | null | transformers | 7,480 | ---
tags:
- summarization
- mT5_multilingual_XLSum
- mt5
- abstractive summarization
- ar
- xlsum
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mT5_multilingual_XLSum-finetune-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetune-ar-xlsum
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2497
- Rouge-1: 32.52
- Rouge-2: 14.71
- Rouge-l: 27.88
- Gen Len: 41.45
- Bertscore: 74.65
## Model description
More information needed
## Intended uses & limitations
More information needed
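A minimal inference sketch for Arabic abstractive summarization (the generation settings are assumptions, not the values used in training):

```python
# Minimal sketch: summarize an Arabic article with the fine-tuned mT5 model.
# max_length / num_beams / no_repeat_ngram_size are assumed values.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "ahmeddbahaa/mT5_multilingual_XLSum-finetune-ar-xlsum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "..."  # an Arabic news article goes here
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=84, num_beams=4, no_repeat_ngram_size=2)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```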
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 3.5465 | 1.0 | 585 | 3.3215 | 30.09 | 13.23 | 26.07 | 36.31 | 73.97 |
| 3.3564 | 2.0 | 1170 | 3.2547 | 31.29 | 13.93 | 26.75 | 41.68 | 74.22 |
| 3.2185 | 3.0 | 1755 | 3.2421 | 31.78 | 14.1 | 27.07 | 41.64 | 74.4 |
| 3.1145 | 4.0 | 2340 | 3.2241 | 31.98 | 14.38 | 27.51 | 40.29 | 74.46 |
| 3.031 | 5.0 | 2925 | 3.2313 | 32.3 | 14.67 | 27.83 | 39.81 | 74.61 |
| 2.9627 | 6.0 | 3510 | 3.2348 | 32.39 | 14.65 | 27.76 | 40.02 | 74.6 |
| 2.9088 | 7.0 | 4095 | 3.2439 | 32.5 | 14.66 | 27.81 | 41.2 | 74.65 |
| 2.8649 | 8.0 | 4680 | 3.2497 | 32.52 | 14.71 | 27.88 | 41.45 | 74.65 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Mathking/pubmedbert-abs_pri-sec_out | c50df4d0e54b03886f15f1b4c76a80cd901bfb06 | 2022-07-19T09:44:30.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"transformers",
"medical-domain",
"fine-tuned",
"license:mit",
"autotrain_compatible"
] | token-classification | false | Mathking | null | Mathking/pubmedbert-abs_pri-sec_out | 27 | null | transformers | 7,481 | ---
language: en
tags:
- medical-domain
- fine-tuned
license: "mit"
metrics:
- f1
---
# PubMedBERT Abstract Primary and secondary outcomes
## Model description
PubMedBERT model fine-tuned for primary and secondary outcome entity extraction in clinical trial articles.
## Intended uses & limitations
### How to use
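A minimal sketch of running the model as a token-classification pipeline (the example sentence and the `aggregation_strategy` are assumptions; the exact entity label names come from the model's config and should be checked there):

```python
# Minimal sketch: extract primary/secondary outcome entities from a sentence.
# The example sentence and aggregation_strategy are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "Mathking/pubmedbert-abs_pri-sec_out"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
sentence = ("The primary outcome was overall survival at 12 months; "
            "secondary outcomes included quality of life and adverse events.")
for entity in ner(sentence):
    print(entity["entity_group"], "->", entity["word"])
```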
### Limitations and bias
## Training data
Dataset from Anna Koroleva (https://github.com/aakorolyova/DeSpin-2.0/tree/main/data/Primary_Secondary_Outcomes)
## Evaluation results
### BibTeX entry and citation info
```bibtex
@inproceedings{koroleva-etal-2020-despin,
title = "{D}e{S}pin: a prototype system for detecting spin in biomedical publications",
author = "Koroleva, Anna and
Kamath, Sanjay and
Bossuyt, Patrick and
Paroubek, Patrick",
booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.bionlp-1.5",
doi = "10.18653/v1/2020.bionlp-1.5",
pages = "49--59",
abstract = "Improving the quality of medical research reporting is crucial to reduce avoidable waste in research and to improve the quality of health care. Despite various initiatives aiming at improving research reporting {--} guidelines, checklists, authoring aids, peer review procedures, etc. {--} overinterpretation of research results, also known as spin, is still a serious issue in research reporting. In this paper, we propose a Natural Language Processing (NLP) system for detecting several types of spin in biomedical articles reporting randomized controlled trials (RCTs). We use a combination of rule-based and machine learning approaches to extract important information on trial design and to detect potential spin. The proposed spin detection system includes algorithms for text structure analysis, sentence classification, entity and relation extraction, semantic similarity assessment. Our algorithms achieved operational performance for the these tasks, F-measure ranging from 79,42 to 97.86{\%} for different tasks. The most difficult task is extracting reported outcomes. Our tool is intended to be used as a semi-automated aid tool for assisting both authors and peer reviewers to detect potential spin. The tool incorporates a simple interface that allows to run the algorithms and visualize their output. It can also be used for manual annotation and correction of the errors in the outputs. The proposed tool is the first tool for spin detection. The tool and the annotated dataset are freely available.",
}
```
|
anablasi/lm_financial_v2 | 7289bcbd1edc01b7b116583a8e7659aabd6fd983 | 2022-07-03T15:53:21.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | anablasi | null | anablasi/lm_financial_v2 | 27 | null | transformers | 7,482 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: modelo_lm_financial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelo_lm_financial
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
KoichiYasuoka/deberta-large-japanese-unidic-ud-head | b94208e97f76bbe927722393d57ac3bac265b85d | 2022-07-20T03:52:09.000Z | [
"pytorch",
"deberta-v2",
"question-answering",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | question-answering | false | KoichiYasuoka | null | KoichiYasuoka/deberta-large-japanese-unidic-ud-head | 27 | null | transformers | 7,483 | ---
language:
- "ja"
tags:
- "japanese"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
widget:
- text: "国語"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "教科書"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "の"
context: "全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"
---
# deberta-large-japanese-unidic-ud-head
## Model Description
This is a DeBERTa(V2) model pretrained on Aozora Bunko (青空文庫) texts for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [deberta-large-japanese-unidic](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-unidic) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when specifying a word that occurs more than once as `question`.
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForQuestionAnswering
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-ud-head")
question="国語"
context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
inputs=tokenizer(question,context,return_tensors="pt")
outputs=model(**inputs)
start,end=torch.argmax(outputs.start_logits),torch.argmax(outputs.end_logits)
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0,start:end+1]))
```
or
```py
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
class TaggerPipeline(TokenClassificationPipeline):
def __call__(self,text):
d=super().__call__(text)
if len(d)>0 and ("start" not in d[0] or d[0]["start"]==None):
import tokenizations
v=[x["word"].replace(" ","") for x in d]
a2b,b2a=tokenizations.get_alignments(v,text)
for i,t in enumerate(a2b):
s,e=(0,0) if t==[] else (t[0],t[-1]+1)
if v[i].startswith(self.tokenizer.unk_token):
s=([[-1]]+[x for x in a2b[0:i] if x>[]])[-1][-1]+1
if v[i].endswith(self.tokenizer.unk_token):
e=([x for x in a2b[i+1:] if x>[]]+[[len(text)]])[0][0]
d[i]["start"],d[i]["end"]=s,e
return d
class TransformersSlowUD(object):
def __init__(self,bert):
import os
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TaggerPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TaggerPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersSlowUD("KoichiYasuoka/deberta-large-japanese-unidic-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
[fugashi](https://pypi.org/project/fugashi), [unidic-lite](https://pypi.org/project/unidic-lite), [pytokenizations](https://pypi.org/project/pytokenizations), and [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/) are required.
|
sherover125/newsclassifier | b92ff2bf008f2eea5e6511a8d72af6fb321c50d5 | 2022-07-20T09:24:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | sherover125 | null | sherover125/newsclassifier | 27 | null | transformers | 7,484 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: newsclassifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newsclassifier
This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1405
- Matthews Correlation: 0.9731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2207 | 1.0 | 2397 | 0.1706 | 0.9595 |
| 0.0817 | 2.0 | 4794 | 0.1505 | 0.9663 |
| 0.0235 | 3.0 | 7191 | 0.1405 | 0.9731 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
alistairmcleay/user-simulator-gpt2 | 2d0fdf00aec555a7a610a6f33142cb4a7e53235b | 2022-06-26T15:14:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:wtfpl"
] | text-generation | false | alistairmcleay | null | alistairmcleay/user-simulator-gpt2 | 27 | null | transformers | 7,485 | ---
license: wtfpl
---
|
fujiki/gpt-neo-en2ja-125M | 9e24f4b3d85bdb18e6b3bb6b9b5591f3d2111694 | 2022-06-27T17:06:53.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | fujiki | null | fujiki/gpt-neo-en2ja-125M | 27 | null | transformers | 7,486 | Entry not found |
BigSalmon/InformalToFormalLincoln53 | 8bbd1a36731987e5ff47b1b9b34176a7827aac28 | 2022-07-01T00:59:52.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln53 | 27 | null | transformers | 7,487 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln53")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln53")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
``` |
Vkt/model-dataaugmentationpipe | e05e78d9000e7d7ed5ebbf2d1d66d76a0bf5a70c | 2022-07-05T17:48:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Vkt | null | Vkt/model-dataaugmentationpipe | 27 | null | transformers | 7,488 | Entry not found |
tau/spider-nq-ctx-encoder | 60a588a491c5470d2a0fe4229a4eb1691b58aa9a | 2022-07-04T08:32:49.000Z | [
"pytorch",
"dpr",
"transformers"
] | null | false | tau | null | tau/spider-nq-ctx-encoder | 27 | null | transformers | 7,489 | Entry not found |
ShihTing/PanJuOffset_TwoClass | f0529cee629895a25672ca87c3ac41b93c095b93 | 2022-07-05T06:49:03.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | ShihTing | null | ShihTing/PanJuOffset_TwoClass | 27 | null | transformers | 7,490 | ---
license: apache-2.0
tags:
- vision
- image-classification
widget:
- src: https://datasets-server.huggingface.co/assets/ShihTing/IsCausewayOffset/--/ShihTing--IsCausewayOffset/validation/0/image/image.jpg
example_title: Ex1
---
# PanJu offset detect by image
Fine-tuned from [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224).
## Dataset
```python
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 329
})
validation: Dataset({
features: ['image', 'label'],
num_rows: 56
})
})
```
36 Break and 293 Normal in train
5 Break and 51 Normal in validation
## Intended uses
### How to use
Here is how to use this model to classify an image into one of the two classes (Break / Normal):
```python
# Load image
import torch
from transformers import ViTFeatureExtractor, ViTForImageClassification,AutoModel
from PIL import Image
import requests
url='https://datasets-server.huggingface.co/assets/ShihTing/IsCausewayOffset/--/ShihTing--IsCausewayOffset/validation/0/image/image.jpg'
image = Image.open(requests.get(url, stream=True).raw)
# Load model
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
device = torch.device('cpu')
extractor = AutoFeatureExtractor.from_pretrained('ShihTing/PanJuOffset_TwoClass')
model = AutoModelForImageClassification.from_pretrained('ShihTing/PanJuOffset_TwoClass')
# Predict
inputs = extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
Prob = outputs.logits.softmax(dim=-1).tolist()
print(Prob)
# model predicts one of the two classes (Break / Normal)
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
|
ryo0634/bert-base-zip-dependency-flat-0 | 7262b7a6754346a6684f1440bd518a6f76774982 | 2022-07-08T04:47:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ryo0634 | null | ryo0634/bert-base-zip-dependency-flat-0 | 27 | null | transformers | 7,491 | Entry not found |
Mimita6654/AI4Code-01 | 37e2fd8a4cc5bde6a65d1339cf444e5619621957 | 2022-07-09T15:06:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Mimita6654 | null | Mimita6654/AI4Code-01 | 27 | null | transformers | 7,492 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: AI4Code-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AI4Code-01
This model is a fine-tuned version of [prajjwal1/bert-medium](https://huggingface.co/prajjwal1/bert-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Tokenizers 0.12.1
|
semy/hf-model-0 | 408de75147f5c2d7575a2a0ef7714e6382ddebeb | 2022-07-27T08:21:42.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | semy | null | semy/hf-model-0 | 27 | null | transformers | 7,493 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: hf-model-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hf-model-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7158
- Accuracy: 0.45
- F1: 0.45
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|
| 0.6107 | 1.0 | 12 | 0.7134 | 0.45 | 0.45 |
| 0.5364 | 2.0 | 24 | 0.7158 | 0.45 | 0.45 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
LDY/Question-Answering-Ican | f08b4e66020bee259736b1fcfe8703243a4a9073 | 2022-07-21T13:18:53.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | question-answering | false | LDY | null | LDY/Question-Answering-Ican | 27 | null | transformers | 7,494 | ---
license: afl-3.0
---
### Time: 2020/07/10
### ICAN-AI
|
Siyong/MC_RN | 86baeae8dd61c7b55548dcf380a77130a01f4642 | 2022-07-23T16:22:03.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Siyong | null | Siyong/MC_RN | 27 | null | transformers | 7,495 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Millad_Customer_RN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Millad_Customer_RN
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5635
- Wer: 0.8113
- Cer: 0.4817
## Model description
More information needed
## Intended uses & limitations
More information needed
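A minimal transcription sketch (assuming the repository ships a `Wav2Vec2Processor` and that the input audio is resampled to 16 kHz; `sample.wav` is a placeholder path):

```python
# Minimal sketch: transcribe an audio file with the fine-tuned wav2vec2 model.
# "sample.wav" is a placeholder; the processor is assumed to be bundled with the model.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_name = "Siyong/MC_RN"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

speech, sample_rate = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze(0)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
prediction = torch.argmax(logits, dim=-1)
print(processor.batch_decode(prediction)[0])
```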
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 600
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| 1.9257 | 13.33 | 2000 | 2.0606 | 0.9767 | 0.5500 |
| 1.4828 | 26.67 | 4000 | 2.1161 | 0.9019 | 0.4932 |
| 1.2582 | 40.0 | 6000 | 2.0589 | 0.8504 | 0.4942 |
| 0.9804 | 53.33 | 8000 | 2.4633 | 0.8745 | 0.4763 |
| 0.7862 | 66.67 | 10000 | 2.4794 | 0.8861 | 0.4944 |
| 0.6492 | 80.0 | 12000 | 2.8693 | 0.8554 | 0.4928 |
| 0.5375 | 93.33 | 14000 | 2.6125 | 0.8296 | 0.4802 |
| 0.4462 | 106.67 | 16000 | 2.7591 | 0.8770 | 0.4974 |
| 0.3873 | 120.0 | 18000 | 3.0325 | 0.8379 | 0.4800 |
| 0.3445 | 133.33 | 20000 | 2.9965 | 0.8761 | 0.4986 |
| 0.3087 | 146.67 | 22000 | 3.3437 | 0.8221 | 0.4923 |
| 0.2755 | 160.0 | 24000 | 3.3022 | 0.8803 | 0.5211 |
| 0.2467 | 173.33 | 26000 | 3.2348 | 0.8479 | 0.4933 |
| 0.2281 | 186.67 | 28000 | 3.8010 | 0.8695 | 0.5081 |
| 0.2119 | 200.0 | 30000 | 3.0446 | 0.8545 | 0.4902 |
| 0.194 | 213.33 | 32000 | 3.0873 | 0.8454 | 0.4840 |
| 0.1677 | 226.67 | 34000 | 3.6184 | 0.8645 | 0.5019 |
| 0.1642 | 240.0 | 36000 | 3.2480 | 0.8412 | 0.4903 |
| 0.1656 | 253.33 | 38000 | 3.4379 | 0.8362 | 0.4816 |
| 0.1371 | 266.67 | 40000 | 3.5117 | 0.8479 | 0.5040 |
| 0.1301 | 280.0 | 42000 | 3.4360 | 0.8404 | 0.4870 |
| 0.128 | 293.33 | 44000 | 3.6589 | 0.8537 | 0.4977 |
| 0.1152 | 306.67 | 46000 | 4.2359 | 0.8545 | 0.5051 |
| 0.1119 | 320.0 | 48000 | 3.5818 | 0.7980 | 0.4882 |
| 0.1026 | 333.33 | 50000 | 3.7618 | 0.8013 | 0.4865 |
| 0.0945 | 346.67 | 52000 | 4.2197 | 0.8404 | 0.5028 |
| 0.0962 | 360.0 | 54000 | 3.9231 | 0.8653 | 0.5030 |
| 0.088 | 373.33 | 56000 | 3.8400 | 0.8354 | 0.4914 |
| 0.0743 | 386.67 | 58000 | 3.4924 | 0.8088 | 0.4824 |
| 0.0811 | 400.0 | 60000 | 3.8370 | 0.8396 | 0.4861 |
| 0.0696 | 413.33 | 62000 | 4.2808 | 0.8412 | 0.5065 |
| 0.0692 | 426.67 | 64000 | 4.0161 | 0.8088 | 0.4744 |
| 0.0622 | 440.0 | 66000 | 3.9080 | 0.8163 | 0.4910 |
| 0.0591 | 453.33 | 68000 | 3.9838 | 0.8113 | 0.4823 |
| 0.0527 | 466.67 | 70000 | 3.8067 | 0.8329 | 0.4914 |
| 0.056 | 480.0 | 72000 | 4.1415 | 0.8096 | 0.4782 |
| 0.0535 | 493.33 | 74000 | 4.3350 | 0.8229 | 0.4828 |
| 0.0531 | 506.67 | 76000 | 3.9808 | 0.8071 | 0.4807 |
| 0.0451 | 520.0 | 78000 | 4.0301 | 0.7988 | 0.4816 |
| 0.044 | 533.33 | 80000 | 4.4680 | 0.8371 | 0.4921 |
| 0.0389 | 546.67 | 82000 | 4.1380 | 0.8121 | 0.4819 |
| 0.0392 | 560.0 | 84000 | 4.3910 | 0.7930 | 0.4763 |
| 0.0389 | 573.33 | 86000 | 4.5086 | 0.8055 | 0.4802 |
| 0.0355 | 586.67 | 88000 | 4.6259 | 0.8113 | 0.4821 |
| 0.0307 | 600.0 | 90000 | 4.5635 | 0.8113 | 0.4817 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1 | 9277397c230dd0b31584f0a7a45a374a333d8bfa | 2022-07-28T12:14:51.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ivan-savchuk | null | ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1 | 27 | null | sentence-transformers | 7,496 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1')
model = AutoModel.from_pretrained('ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 3165 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 316,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt | 7ec9fc83d13cf29fa7706ebd157f2e1c62affe4f | 2022-05-30T15:40:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ba",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | AigizK | null | AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt | 26 | null | transformers | 7,497 | ---
language:
- ba
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-large-xls-r-300m-bashkir-cv7_opt
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ba
metrics:
- name: Test WER
type: wer
value: 0.04440795062008041
- name: "Test CER"
type: "cer"
value: 0.010491234992390509
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bashkir-cv7_opt
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset.
It achieves the following results on the evaluation set:
- Training Loss: 0.268400
- Validation Loss: 0.088252
- WER without LM: 0.085588
- WER with LM: 0.04440795062008041
- CER with LM: 0.010491234992390509
## Model description
Trained with this [Jupyter notebook](https://drive.google.com/file/d/1KohDXZtKBWXVPZYlsLtqfxJGBzKmTtSh/view?usp=sharing)
## Intended uses & limitations
To reduce the size of the character set, the following letters were replaced or removed:
- 'я' -> 'йа'
- 'ю' -> 'йу'
- 'ё' -> 'йо'
- 'е' -> 'йэ' for first letter
- 'е' -> 'э' for other cases
- 'ъ' -> deleted
- 'ь' -> deleted
Therefore, to recover the original spelling, you need to apply the reverse transformation and use the language model; a minimal sketch of the unambiguous part is shown below.
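Purely as an illustration (not from the original card), a naive version of this reverse mapping could look like the following; the word-initial handling of 'е' and the treatment of a genuine Bashkir 'э' are assumptions, and the deleted 'ъ'/'ь' can only be restored by the language model:
```python
import re

def restore_text(text: str) -> str:
    """Naive reverse of the character reduction described above (assumption-laden sketch)."""
    # Two-character sequences first; these replacements are ambiguous whenever a word
    # genuinely contains 'й' followed by 'а'/'у'/'о' – the language model mentioned
    # above is what resolves such cases in practice.
    for src, dst in (("йа", "я"), ("йу", "ю"), ("йо", "ё")):
        text = text.replace(src, dst)
    # Word-initial 'йэ' came from 'е' (assumption: "first letter" means word-initial).
    text = re.sub(r"\bйэ", "е", text)
    # Non-initial 'э' came from 'е'; a genuine non-initial 'э' would be mis-restored here.
    text = re.sub(r"\Bэ", "е", text)
    # The deleted 'ъ' and 'ь' cannot be recovered without the language model.
    return text

print(restore_text("йаҡшы"))  # illustrative: expected output "яҡшы"
```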
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 50
- mixed_precision_training: Native AMP
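As a rough reconstruction (not part of the generated card), these settings correspond to a `transformers.TrainingArguments` configuration like the one below; the output directory is a placeholder and the model/data-collator setup is omitted:
```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters; requires a CUDA device because of fp16.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-bashkir-cv7_opt",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,   # gives the effective batch size of 64
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_steps=300,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```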
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu113
- Datasets 1.18.2
- Tokenizers 0.10.3
|
AlexMaclean/sentence-compression | d0bd05865437a846e4d309e470489c31d04b461a | 2021-12-04T08:10:24.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | AlexMaclean | null | AlexMaclean/sentence-compression | 26 | 1 | transformers | 7,498 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: sentence-compression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-compression
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2973
- Accuracy: 0.8912
- F1: 0.8367
- Precision: 0.8495
- Recall: 0.8243
## Model description
More information needed
## Intended uses & limitations
More information needed
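As a purely illustrative sketch (not from the original card), the checkpoint can presumably be used like any `token-classification` model; the assumption that label id 1 marks tokens kept in the compressed sentence is not documented here:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("AlexMaclean/sentence-compression")
model = AutoModelForTokenClassification.from_pretrained("AlexMaclean/sentence-compression")

sentence = "The committee, which met on Tuesday, finally approved the long-delayed budget."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)[0]

# Assumption: label id 1 marks tokens that are kept in the compressed sentence.
kept = [
    tok
    for tok, label in zip(inputs.tokens(), predictions.tolist())
    if label == 1 and tok not in tokenizer.all_special_tokens
]
print(tokenizer.convert_tokens_to_string(kept))
```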
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2686 | 1.0 | 10000 | 0.2667 | 0.8894 | 0.8283 | 0.8725 | 0.7884 |
| 0.2205 | 2.0 | 20000 | 0.2704 | 0.8925 | 0.8372 | 0.8579 | 0.8175 |
| 0.1476 | 3.0 | 30000 | 0.2973 | 0.8912 | 0.8367 | 0.8495 | 0.8243 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ArBert/bert-base-uncased-finetuned-ner-kmeans | 9c9906c07c06febf1f7e77ac72fa340dfe2785e7 | 2022-02-11T16:45:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/bert-base-uncased-finetuned-ner-kmeans | 26 | null | transformers | 7,499 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner-kmeans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner-kmeans
This model is a fine-tuned version of [ArBert/bert-base-uncased-finetuned-ner](https://huggingface.co/ArBert/bert-base-uncased-finetuned-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1169
- Precision: 0.9084
- Recall: 0.9245
- F1: 0.9164
- Accuracy: 0.9792
## Model description
More information needed
## Intended uses & limitations
More information needed
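As an illustrative sketch only (not part of the generated card), the checkpoint can presumably be queried through the standard `token-classification` pipeline; the tag set it produces (e.g. PER/ORG/LOC) is an assumption based on the "ner" naming:
```python
from transformers import pipeline

# Hypothetical usage; the entity label scheme is not documented in this card.
ner = pipeline(
    "token-classification",
    model="ArBert/bert-base-uncased-finetuned-ner-kmeans",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Hugging Face is based in New York City."))
```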
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.036 | 1.0 | 1123 | 0.1010 | 0.9086 | 0.9117 | 0.9101 | 0.9779 |
| 0.0214 | 2.0 | 2246 | 0.1094 | 0.9033 | 0.9199 | 0.9115 | 0.9784 |
| 0.014 | 3.0 | 3369 | 0.1169 | 0.9084 | 0.9245 | 0.9164 | 0.9792 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|