| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-22 00:45:16) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (491 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-22 00:44:03) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
jacob-valdez/blenderbot-small-tflite | jacob-valdez | 2021-04-25T00:47:29Z | 0 | 1 | null | [
"tflite",
"Android",
"blenderbot",
"en",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: "en"
#thumbnail: "url to a thumbnail used in social sharing"
tags:
- Android
- tflite
- blenderbot
license: "apache-2.0"
#datasets:
#metrics:
---
# Model Card
`blenderbot-small-tflite` is a TFLite version of `blenderbot-small-90M` that I converted for my UTA CSE3310 class. See the repo at [https://github.com/kmosoti/DesparadosAEYE](https://github.com/kmosoti/DesparadosAEYE) and the conversion process [here](https://drive.google.com/file/d/1F93nMsDIm1TWhn70FcLtcaKQUynHq9wS/view?usp=sharing).
You have to right-pad your user and model input token ids to make them [32]-shaped, then indicate the true lengths with the 3rd and 4th parameters.
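For illustration, a minimal invocation sketch (the model path and token ids below are hypothetical; the tensor indices follow the input details printed next):
```python
import numpy as np
import tensorflow as tf

# Hypothetical local path to the converted model
interpreter = tf.lite.Interpreter(model_path="blenderbot-small.tflite")
interpreter.allocate_tensors()

user_tokens = [42, 7, 19]  # hypothetical token ids from the BlenderBot tokenizer
user_input = np.zeros(32, dtype=np.int32)   # right-padded to [32]
user_input[:len(user_tokens)] = user_tokens
model_input = np.zeros(32, dtype=np.int32)  # no prior model turn

interpreter.set_tensor(0, user_input)                               # input_tokens
interpreter.set_tensor(1, model_input)                              # decoder_input_tokens
interpreter.set_tensor(2, np.array(len(user_tokens), np.int32))     # input_len (scalar)
interpreter.set_tensor(3, np.array(0, np.int32))                    # decoder_input_len (scalar)
interpreter.invoke()
print(interpreter.get_tensor(interpreter.get_output_details()[0]["index"]))
```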
```python
display(interpreter.get_input_details())
display(interpreter.get_output_details())
```
```python
[{'dtype': numpy.int32,
'index': 0,
'name': 'input_tokens',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([32], dtype=int32),
'shape_signature': array([32], dtype=int32),
'sparsity_parameters': {}},
{'dtype': numpy.int32,
'index': 1,
'name': 'decoder_input_tokens',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([32], dtype=int32),
'shape_signature': array([32], dtype=int32),
'sparsity_parameters': {}},
{'dtype': numpy.int32,
'index': 2,
'name': 'input_len',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([], dtype=int32),
'shape_signature': array([], dtype=int32),
'sparsity_parameters': {}},
{'dtype': numpy.int32,
'index': 3,
'name': 'decoder_input_len',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([], dtype=int32),
'shape_signature': array([], dtype=int32),
'sparsity_parameters': {}}]
[{'dtype': numpy.int32,
'index': 3113,
'name': 'Identity',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([1], dtype=int32),
'shape_signature': array([1], dtype=int32),
'sparsity_parameters': {}}]
``` |
glasses/cse_resnet50 | glasses | 2021-04-24T10:50:58Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:1512.03385",
"arxiv:1812.01187",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # cse_resnet50
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this activates the forward hooks and tells the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
spencerh/leftpartisan | spencerh | 2021-04-23T19:27:15Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | # Text classifier using DistilBERT to determine Partisanship
## This is one of many single-class partisanship models
label_0 refers to "left" while label_1 refers to "other".
This model was trained on 40,000 articles.
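A minimal usage sketch with the standard `transformers` pipeline (the example sentence is hypothetical):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="spencerh/leftpartisan")
# label_0 = "left", label_1 = "other"
print(classifier("The senator introduced a bill expanding public healthcare funding."))
```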
### Best Practices
This model was optimized for text up to 512 tokens in length. Text shorter than 150 tokens is likely to produce inaccurate results. |
spencerh/rightpartisan | spencerh | 2021-04-23T19:26:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | # Text classifier using DistilBERT to determine Partisanship
## This is one of the single-class partisan-detecting models (see leftpartisan/leftcenterpartisan/rightcenterpartisan/centerpartisan)
label_0 refers to "other" while label_1 refers to "right" (right as in right-leaning).
This was trained with 40,000 articles.
### Best Practices
This model was optimized for text up to 512 tokens in length. Text shorter than 150 tokens is likely to produce inaccurate results. |
glasses/deit_tiny_patch16_224 | glasses | 2021-04-22T18:44:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:2010.11929",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # deit_tiny_patch16_224
Implementation of DeiT proposed in [Training data-efficient image
transformers & distillation through
attention](https://arxiv.org/abs/2012.12877)
An attention-based distillation is proposed where a new token, the `dist` token,
is added to the model.

``` python
DeiT.deit_tiny_patch16_224()
DeiT.deit_small_patch16_224()
DeiT.deit_base_patch16_224()
DeiT.deit_base_patch16_384()
```
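A minimal usage sketch, assuming the `glasses` package is installed and exposes `DeiT` under `glasses.models`:
``` python
import torch
from glasses.models import DeiT

model = DeiT.deit_tiny_patch16_224()
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # expected: torch.Size([1, 1000]) with the default 1000 classes
```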
|
glasses/vit_large_patch16_384 | glasses | 2021-04-22T18:43:25Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:2010.11929",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # vit_large_patch16_384
Implementation of Vision Transformer (ViT) proposed in [An Image Is
Worth 16x16 Words: Transformers For Image Recognition At
Scale](https://arxiv.org/pdf/2010.11929.pdf)
The following image from the authors shows the architecture.

``` python
ViT.vit_small_patch16_224()
ViT.vit_base_patch16_224()
ViT.vit_base_patch16_384()
ViT.vit_base_patch32_384()
ViT.vit_huge_patch16_224()
ViT.vit_huge_patch32_384()
ViT.vit_large_patch16_224()
ViT.vit_large_patch16_384()
ViT.vit_large_patch32_384()
```
Examples:
``` python
# change activation
ViT.vit_base_patch16_224(activation = nn.SELU)
# change number of classes (default is 1000 )
ViT.vit_base_patch16_224(n_classes=100)
# pass a different block, default is TransformerEncoderBlock
ViT.vit_base_patch16_224(block=MyCoolTransformerBlock)
# get features
model = ViT.vit_base_patch16_224()
# first call .features; this activates the forward hooks and tells the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...]
# change the tokens, you have to subclass ViTTokens
class MyTokens(ViTTokens):
def __init__(self, emb_size: int):
super().__init__(emb_size)
self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size))
ViT(tokens=MyTokens)
```
|
glasses/vit_large_patch16_224 | glasses | 2021-04-22T18:42:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:2010.11929",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # vit_large_patch16_224
Implementation of Vision Transformer (ViT) proposed in [An Image Is
Worth 16x16 Words: Transformers For Image Recognition At
Scale](https://arxiv.org/pdf/2010.11929.pdf)
The following image from the authors shows the architecture.

``` python
ViT.vit_small_patch16_224()
ViT.vit_base_patch16_224()
ViT.vit_base_patch16_384()
ViT.vit_base_patch32_384()
ViT.vit_huge_patch16_224()
ViT.vit_huge_patch32_384()
ViT.vit_large_patch16_224()
ViT.vit_large_patch16_384()
ViT.vit_large_patch32_384()
```
Examples:
``` python
# change activation
ViT.vit_base_patch16_224(activation = nn.SELU)
# change number of classes (default is 1000 )
ViT.vit_base_patch16_224(n_classes=100)
# pass a different block, default is TransformerEncoderBlock
ViT.vit_base_patch16_224(block=MyCoolTransformerBlock)
# get features
model = ViT.vit_base_patch16_224()
# first call .features; this activates the forward hooks and tells the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...]
# change the tokens, you have to subclass ViTTokens
class MyTokens(ViTTokens):
def __init__(self, emb_size: int):
super().__init__(emb_size)
self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size))
ViT(tokens=MyTokens)
```
|
glasses/vit_huge_patch32_384 | glasses | 2021-04-22T18:41:37Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:2010.11929",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # vit_huge_patch32_384
Implementation of Vision Transformer (ViT) proposed in [An Image Is
Worth 16x16 Words: Transformers For Image Recognition At
Scale](https://arxiv.org/pdf/2010.11929.pdf)
The following image from the authors shows the architecture.

``` python
ViT.vit_small_patch16_224()
ViT.vit_base_patch16_224()
ViT.vit_base_patch16_384()
ViT.vit_base_patch32_384()
ViT.vit_huge_patch16_224()
ViT.vit_huge_patch32_384()
ViT.vit_large_patch16_224()
ViT.vit_large_patch16_384()
ViT.vit_large_patch32_384()
```
Examples:
``` python
# change activation
ViT.vit_base_patch16_224(activation = nn.SELU)
# change number of classes (default is 1000 )
ViT.vit_base_patch16_224(n_classes=100)
# pass a different block, default is TransformerEncoderBlock
ViT.vit_base_patch16_224(block=MyCoolTransformerBlock)
# get features
model = ViT.vit_base_patch16_224()
# first call .features; this activates the forward hooks and tells the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...]
# change the tokens, you have to subclass ViTTokens
class MyTokens(ViTTokens):
def __init__(self, emb_size: int):
super().__init__(emb_size)
self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size))
ViT(tokens=MyTokens)
```
|
glasses/vit_huge_patch16_224 | glasses | 2021-04-22T18:39:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:2010.11929",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # vit_huge_patch16_224
Implementation of Vision Transformer (ViT) proposed in [An Image Is
Worth 16x16 Words: Transformers For Image Recognition At
Scale](https://arxiv.org/pdf/2010.11929.pdf)
The following image from the authors shows the architecture.

``` python
ViT.vit_small_patch16_224()
ViT.vit_base_patch16_224()
ViT.vit_base_patch16_384()
ViT.vit_base_patch32_384()
ViT.vit_huge_patch16_224()
ViT.vit_huge_patch32_384()
ViT.vit_large_patch16_224()
ViT.vit_large_patch16_384()
ViT.vit_large_patch32_384()
```
Examples:
``` python
# change activation
ViT.vit_base_patch16_224(activation = nn.SELU)
# change number of classes (default is 1000 )
ViT.vit_base_patch16_224(n_classes=100)
# pass a different block, default is TransformerEncoderBlock
ViT.vit_base_patch16_224(block=MyCoolTransformerBlock)
# get features
model = ViT.vit_base_patch16_224()
# first call .features; this activates the forward hooks and tells the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...]
# change the tokens, you have to subclass ViTTokens
class MyTokens(ViTTokens):
def __init__(self, emb_size: int):
super().__init__(emb_size)
self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size))
ViT(tokens=MyTokens)
```
|
k948181/ybdH-1 | k948181 | 2021-04-22T13:34:20Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | ```
>tr|Q8ZR27|Q8ZR27_SALTY Putative glycerol dehydrogenase OS=Salmonella typhimurium (strain LT2 / SGSC1412 / ATCC 700720) OX=99287 GN=ybdH PE=3 SV=1
MNHTEIRVVTGPANYFSHAGSLERLTDFFTPEQLSHAVWVYGERAIAAARPYLPEAFERA
GAKHLPFTGHCSERHVAQLAHACNDDRQVVIGVGGGALLDTAKALARRLALPFVAIPTIA
ATCAAWTPLSVWYNDAGQALQFEIFDDANFLVLVEPRIILQAPDDYLLAGIGDTLAKWYE
AVVLAPQPETLPLTVRLGINSACAIRDLLLDSSEQALADKQQRRLTQAFCDVVDAIIAGG
GMVGGLGERYTRVAAAHAVHNGLTVLPQTEKFLHGTKVAYGILVQSALLGQDDVLAQLIT
AYRRFHLPARLSELDVDIHNTAEIDRVIAHTLRPVESIHYLPVTLTPDTLRAAFEKVEFF
RI
```
|
glasses/dummy | glasses | 2021-04-21T18:24:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:1512.03385",
"arxiv:1812.01187",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # ResNet
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this activates the forward hooks and tells the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
ahmedabdelali/bert-base-qarib_far_8280k | ahmedabdelali | 2021-04-21T13:40:36Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"QARiB",
"qarib",
"ar",
"dataset:arabic_billion_words",
"dataset:open_subtitles",
"dataset:twitter",
"dataset:Farasa",
"arxiv:2102.10684",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: ar
tags:
- pytorch
- tf
- QARiB
- qarib
datasets:
- arabic_billion_words
- open_subtitles
- twitter
- Farasa
metrics:
- f1
widget:
- text: "و+قام ال+مدير [MASK]"
---
# QARiB: QCRI Arabic and Dialectal BERT
## About QARiB Farasa
The QCRI Arabic and Dialectal BERT (QARiB) model was trained on a collection of ~420 million tweets and ~180 million sentences of text.
The tweets were collected using the Twitter API with the language filter `lang:ar`. The text data was a combination of
[Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/).
QARiB is the Arabic word for "boat".
## Model and Parameters:
- Data size: 14B tokens
- Vocabulary: 64k
- Iterations: 10M
- Number of Layers: 12
## Training QARiB
See details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md)
## Using QARiB
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md)
This model expects the data to be segmented. You may use the [Farasa Segmenter](https://farasa-api.qcri.org/segmentation/) API.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> fill_mask = pipeline("fill-mask", model="./models/bert-base-qarib_far")
>>> fill_mask("و+قام ال+مدير [MASK]")
[
]
>>> fill_mask("و+قام+ت ال+مدير+ة [MASK]")
[
]
>>> fill_mask("قللي وشفيييك يرحم [MASK]")
[
]
```
## Evaluations:
|**Experiment** |**mBERT**|**AraBERT0.1**|**AraBERT1.0**|**ArabicBERT**|**QARiB**|
|---------------|---------|--------------|--------------|--------------|---------|
|Dialect Identification | 6.06% | 59.92% | 59.85% | 61.70% | **65.21%** |
|Emotion Detection | 27.90% | 43.89% | 42.37% | 41.65% | **44.35%** |
|Named-Entity Recognition (NER) | 49.38% | 64.97% | **66.63%** | 64.04% | 61.62% |
|Offensive Language Detection | 83.14% | 88.07% | 88.97% | 88.19% | **91.94%** |
|Sentiment Analysis | 86.61% | 90.80% | **93.58%** | 83.27% | 93.31% |
## Model Weights and Vocab Download
From the Hugging Face site: https://huggingface.co/qarib/bert-base-qarib_far
## Contacts
Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih
## Reference
```
@article{abdelali2021pretraining,
title={Pre-Training BERT on Arabic Tweets: Practical Considerations},
author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih},
year={2021},
eprint={2102.10684},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ahmedabdelali/bert-base-qarib_far_9920k | ahmedabdelali | 2021-04-21T13:38:28Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"QARiB",
"qarib",
"ar",
"dataset:arabic_billion_words",
"dataset:open_subtitles",
"dataset:twitter",
"dataset:Farasa",
"arxiv:2102.10684",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: ar
tags:
- pytorch
- tf
- QARiB
- qarib
datasets:
- arabic_billion_words
- open_subtitles
- twitter
- Farasa
metrics:
- f1
widget:
- text: "و+قام ال+مدير [MASK]"
---
# QARiB: QCRI Arabic and Dialectal BERT
## About QARiB Farasa
The QCRI Arabic and Dialectal BERT (QARiB) model was trained on a collection of ~420 million tweets and ~180 million sentences of text.
The tweets were collected using the Twitter API with the language filter `lang:ar`. The text data was a combination of
[Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/).
QARiB is the Arabic word for "boat".
## Model and Parameters:
- Data size: 14B tokens
- Vocabulary: 64k
- Iterations: 10M
- Number of Layers: 12
## Training QARiB
See details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md)
## Using QARiB
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md)
This model expects the data to be segmented. You may use the [Farasa Segmenter](https://farasa-api.qcri.org/segmentation/) API.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> fill_mask = pipeline("fill-mask", model="./models/bert-base-qarib_far")
>>> fill_mask("و+قام ال+مدير [MASK]")
[
]
>>> fill_mask("و+قام+ت ال+مدير+ة [MASK]")
[
]
>>> fill_mask("قللي وشفيييك يرحم [MASK]")
[
]
```
## Evaluations:
|**Experiment** |**mBERT**|**AraBERT0.1**|**AraBERT1.0**|**ArabicBERT**|**QARiB**|
|---------------|---------|--------------|--------------|--------------|---------|
|Dialect Identification | 6.06% | 59.92% | 59.85% | 61.70% | **65.21%** |
|Emotion Detection | 27.90% | 43.89% | 42.37% | 41.65% | **44.35%** |
|Named-Entity Recognition (NER) | 49.38% | 64.97% | **66.63%** | 64.04% | 61.62% |
|Offensive Language Detection | 83.14% | 88.07% | 88.97% | 88.19% | **91.94%** |
|Sentiment Analysis | 86.61% | 90.80% | **93.58%** | 83.27% | 93.31% |
## Model Weights and Vocab Download
From the Hugging Face site: https://huggingface.co/qarib/bert-base-qarib_far
## Contacts
Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih
## Reference
```
@article{abdelali2021pretraining,
title={Pre-Training BERT on Arabic Tweets: Practical Considerations},
author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih},
year={2021},
eprint={2102.10684},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
SajjadAyoubi/xlm-roberta-large-fa-qa | SajjadAyoubi | 2021-04-21T07:23:30Z | 36 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ### How to use
#### Requirements
This model requires the `transformers` and `sentencepiece` packages, both of which can be
installed using `pip`.
```sh
pip install transformers sentencepiece
```
#### Pipelines 🚀
In case you are not familiar with Transformers, you can use pipelines instead.
Note that pipelines cannot return _no answer_ for a question.
```python
from transformers import pipeline
model_name = "SajjadAyoubi/lm-roberta-large-fa-qa"
qa_pipeline = pipeline("question-answering", model=model_name, tokenizer=model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
for question in questions:
print(qa_pipeline({"context": text, "question": question}))
>>> {'score': 0.4839823544025421, 'start': 8, 'end': 18, 'answer': 'سجاد ایوبی'}
>>> {'score': 0.3747948706150055, 'start': 24, 'end': 32, 'answer': '۲۰ سالمه'}
>>> {'score': 0.5945395827293396, 'start': 38, 'end': 55, 'answer': 'پردازش زبان طبیعی'}
```
#### Manual approach 🔥
Using the manual approach, it is possible to get _no answer_, with even better
performance.
- PyTorch
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from src.utils import AnswerPredictor
model_name = "SajjadAyoubi/lm-roberta-large-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
# this class is from src/utils.py and you can read more about it
predictor = AnswerPredictor(model, tokenizer, device="cpu", n_best=10)
preds = predictor(questions, [text] * 3, batch_size=3)
for k, v in preds.items():
print(v)
```
This produces output like the following:
```
100%|██████████| 1/1 [00:00<00:00, 3.56it/s]
{'score': 8.040637016296387, 'text': 'سجاد ایوبی'}
{'score': 9.901972770690918, 'text': '۲۰'}
{'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'}
```
- TensorFlow 2.X
```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
from src.utils import TFAnswerPredictor
model_name = "SajjadAyoubi/lm-roberta-large-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
# this class is from src/utils.py, you can read more about it
predictor = TFAnswerPredictor(model, tokenizer, n_best=10)
preds = predictor(questions, [text] * 3, batch_size=3)
for k, v in preds.items():
print(v)
```
This produces output like the following:
```text
100%|██████████| 1/1 [00:00<00:00, 3.56it/s]
{'score': 8.040637016296387, 'text': 'سجاد ایوبی'}
{'score': 9.901972770690918, 'text': '۲۰'}
{'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'}
```
Or you can access the whole demonstration using [HowToUse iPython Notebook on Google Colab](https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/HowToUse.ipynb)
|
stas/t5-very-small-random | stas | 2021-04-21T02:34:01Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | This is a tiny random T5 model used for testing.
See `t5-make-very-small-model.py` for how it was created.
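A minimal load sketch (assuming the repo ships a tokenizer compatible with the standard `transformers` T5 classes):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("stas/t5-very-small-random")
model = T5ForConditionalGeneration.from_pretrained("stas/t5-very-small-random")
print(model.num_parameters())  # tiny parameter count, handy for CI tests
```
|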
castorini/ance-dpr-question-multi | castorini | 2021-04-21T01:36:24Z | 143 | 1 | transformers | [
"transformers",
"pytorch",
"dpr",
"feature-extraction",
"arxiv:2007.00808",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf)
For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
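As a hedged sketch, the checkpoint can also be loaded with the standard `transformers` DPR classes (the query below is hypothetical):
```python
import torch
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("castorini/ance-dpr-question-multi")
encoder = DPRQuestionEncoder.from_pretrained("castorini/ance-dpr-question-multi")

inputs = tokenizer("who wrote the declaration of independence?", return_tensors="pt")
with torch.no_grad():
    embedding = encoder(**inputs).pooler_output  # dense query embedding
print(embedding.shape)  # expected: torch.Size([1, 768]) for a BERT-base encoder
```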
|
Davlan/mT5_base_yoruba_adr | Davlan | 2021-04-20T21:16:26Z | 24 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2003.10564",
"arxiv:2103.08647",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
language: yo
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mT5_base_yoruba_adr
## Model description
**mT5_base_yoruba_adr** is an **automatic diacritics restoration** model for the Yorùbá language, based on a fine-tuned mT5-base model. It achieves **state-of-the-art performance** in adding the correct diacritics or tonal marks to Yorùbá texts.
Specifically, this model is an *mT5_base* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
## Intended uses & limitations
#### How to use
You can use this model with the Transformers *pipeline* for ADR. A minimal sketch, assuming the standard text2text-generation pipeline for an mT5 checkpoint (the example input is a placeholder for an undiacritized Yorùbá sentence):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("Davlan/mT5_base_yoruba_adr")
model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/mT5_base_yoruba_adr")
adr = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
example = "..."  # an undiacritized Yorùbá sentence
print(adr(example))
```
#### Limitations and bias
This model is limited by its training data: the JW300 (religious-domain) Yorùbá corpus and the Menyo-20k dataset. It may not generalize well to all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 Yorùbá corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
64.63 BLEU on [Global Voices test set](https://arxiv.org/abs/2003.10564)
70.27 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647)
### BibTeX entry and citation info
By Jesujoba Alabi and David Adelani
```
```
|
skylord/wav2vec2-large-xlsr-hindi | skylord | 2021-04-20T07:24:00Z | 19 | 2 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: hi
datasets:
- common_voice
- indic tts
- iiith
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Hindi XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
- name: Common Voice hi
type: common_voice
args: hi
- name: Indic IIT (IITM)
type: indic
args: hi
- name: IIITH Indic Dataset
type: iiith
args: hi
metrics:
- name: Custom Dataset Hindi WER
type: wer
value: 17.23
- name: CommonVoice Hindi (Test) WER
type: wer
value: 56.46
---
# Wav2Vec2-Large-XLSR-53-Hindi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hindi using the following datasets:
- [Common Voice](https://huggingface.co/datasets/common_voice),
- [Indic TTS- IITM](https://www.iitm.ac.in/donlab/tts/index.php) and
- [IIITH - Indic Speech Datasets](http://speech.iiit.ac.in/index.php/research-svl/69.html)
The Indic datasets are well balanced across gender and accents. However, the CommonVoice dataset is skewed towards male voices.
Fine-tuning facebook/wav2vec2-large-xlsr-53 on these Hindi datasets for 60 epochs yielded 17.05% WER.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hi", split="test")
processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Predictions
*Some good ones ..... *
| Predictions | Reference |
|-------|-------|
|फिर वो सूरज तारे पहाड बारिश पदछड़ दिन रात शाम नदी बर्फ़ समुद्र धुंध हवा कुछ भी हो सकती है | फिर वो सूरज तारे पहाड़ बारिश पतझड़ दिन रात शाम नदी बर्फ़ समुद्र धुंध हवा कुछ भी हो सकती है |
| इस कारण जंगल में बडी दूर स्थित राघव के आश्रम में लोघ कम आने लगे और अधिकांश भक्त सुंदर के आश्रम में जाने लगे | इस कारण जंगल में बड़ी दूर स्थित राघव के आश्रम में लोग कम आने लगे और अधिकांश भक्त सुन्दर के आश्रम में जाने लगे |
| अपने बचन के अनुसार शुभमूर्त पर अनंत दक्षिणी पर्वत गया और मंत्रों का जप करके सरोवर में उतरा | अपने बचन के अनुसार शुभमुहूर्त पर अनंत दक्षिणी पर्वत गया और मंत्रों का जप करके सरोवर में उतरा |
*Some crappy stuff .... *
| Predictions | Reference |
|-------|-------|
| वस गनिल साफ़ है। | उसका दिल साफ़ है। |
| चाय वा एक कुछ लैंगे हब | चायवाय कुछ लेंगे आप |
| टॉम आधे है स्कूल हें है | टॉम अभी भी स्कूल में है |
## Evaluation
The model can be evaluated on the following two datasets:
1. Custom dataset created from 20% of Indic, IIITH and CV (test): WER 17.xx%
2. CommonVoice Hindi test dataset: WER 56.xx%
Links to the datasets are provided above (check the links at the start of the README)
train-test csv files are shared on the following gdrive links:
a. IIITH [train](https://storage.googleapis.com/indic-dataset/train_test_splits/iiit_hi_train.csv) [test](https://storage.googleapis.com/indic-dataset/train_test_splits/iiit_hi_test.csv)
b. Indic TTS [train](https://storage.googleapis.com/indic-dataset/train_test_splits/indic_train_full.csv) [test](https://storage.googleapis.com/indic-dataset/train_test_splits/indic_test_full.csv)
Update the audio_path as per your local file structure.
```python
import torch
import torchaudio
import datasets
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
## Load the datasets
common_voice = load_dataset("common_voice", "hi")
indic = load_dataset("csv", data_files= {'train':"/workspace/data/hi2/indic_train_full.csv",
"test": "/workspace/data/hi2/indic_test_full.csv"}, download_mode="force_redownload")
iiith = load_dataset("csv", data_files= {"train": "/workspace/data/hi2/iiit_hi_train.csv",
"test": "/workspace/data/hi2/iiit_hi_test.csv"}, download_mode="force_redownload")
## Pre-process datasets and concatenate to create test dataset
# Drop columns of common_voice
split = ['train', 'test', 'validation', 'other', 'invalidated']
for sp in split:
common_voice[sp] = common_voice[sp].remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])
common_voice = common_voice.rename_column('path', 'audio_path')
common_voice = common_voice.rename_column('sentence', 'target_text')
train_dataset = datasets.concatenate_datasets([indic['train'], iiith['train'], common_voice['train']])
test_dataset = datasets.concatenate_datasets([indic['test'], iiith['test'], common_voice['test'], common_voice['validation']])
## Load model from HF hub
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]'
unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]' # Some unwanted unicode chars
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["target_text"] = re.sub(chars_to_ignore_regex, '', batch["target_text"])
batch["target_text"] = re.sub(unicode_ignore_regex, '', batch["target_text"])
speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result on custom dataset**: 17.23 %
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]'
unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]' # Some unwanted unicode chars
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).sub(unicode_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result on CommonVoice**: 56.46 %
## Training
The Common Voice `train` and `validation` datasets were used for training, along with the Indic TTS and IIITH datasets.
The script used for training and the wandb dashboard can be found [here](https://wandb.ai/thinkevolve/huggingface/reports/Project-Hindi-XLSR-Large--Vmlldzo2MTI2MTQ)
|
tanmaylaud/wav2vec2-large-xlsr-hindi-marathi | tanmaylaud | 2021-04-19T18:40:07Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hindi",
"marathi",
"mr",
"hi",
"dataset:openslr",
"dataset:interspeech_2021_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: [mr,hi]
datasets:
- openslr
- interspeech_2021_asr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hindi
- marathi
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Hindi-Marathi by Tanmay Laud
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR hi, OpenSLR mr
type: openslr, interspeech_2021_asr
metrics:
- name: Test WER
type: wer
value: 23.736641
---
# Wav2Vec2-Large-XLSR-53-Hindi-Marathi
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Hindi and Marathi using the OpenSLR SLR64 datasets. When using this model, make sure that your speech input is sampled at 16kHz.
## Installation
```bash
pip install git+https://github.com/huggingface/transformers.git datasets librosa torch==1.7.0 torchaudio==0.7.0 jiwer
```
## Eval dataset:
```bash
wget https://www.openslr.org/resources/103/Marathi_test.zip -P data/marathi
unzip -P "K3[2?do9" data/marathi/Marathi_test.zip -d data/marathi/.
tar -xzf data/marathi/Marathi_test.tar.gz -C data/marathi/.
wget https://www.openslr.org/resources/103/Hindi_test.zip -P data/hindi
unzip -P "w9I2{3B*" data/hindi/Hindi_test.zip -d data/hindi/.
tar -xzf data/hindi/Hindi_test.tar.gz -C data/hindi/.
wget -O test.csv 'https://filebin.net/snrz6bt13usv8w2e/test_large.csv?t=ps3n99ho'
#If download does not work, paste this link in browser: https://filebin.net/snrz6bt13usv8w2e/test_large.csv
```
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Marathi text and path fields:
```python
import torch
import torchaudio
import librosa
import numpy as np
import re
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi')
model = Wav2Vec2ForCTC.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi').to("cuda")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = speech_array[0].numpy()
batch["sampling_rate"] = sampling_rate
batch["target_text"] = batch["sentence"]
batch["speech"] = librosa.resample(np.asarray(batch["speech"]), sampling_rate, 16_000)
batch["sampling_rate"] = 16_000
return batch
test_data = test_data.map(speech_file_to_array_fn)
inputs = processor(test_data["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_data["text"][:2])
```
# Code For Evaluation on OpenSLR (Hindi + Marathi: https://filebin.net/snrz6bt13usv8w2e/test_large.csv)
```python
import torchaudio
import torch
import librosa
import numpy as np
import re
test = Dataset.from_csv('test.csv')
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\n।]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = speech_array[0].numpy()
batch["sampling_rate"] = sampling_rate
batch["target_text"] = batch["sentence"]
batch["speech"] = librosa.resample(np.asarray(batch["speech"]), sampling_rate, 16_000)
batch["sampling_rate"] = 16_000
return batch
test = test.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
# we do not want to group tokens when computing the metrics
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
test = test.map(evaluate, batched=True, batch_size=32)
print("WER: {:2f}".format(100 * wer.compute(predictions=test["pred_strings"], references=test["sentence"])))
```
#### Code for Evaluation on Common Voice Hindi (Common voice does not have Marathi yet)
```python
import torchaudio
import torch
import librosa
import numpy as np
import re
from datasets import load_metric, load_dataset, Dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi')
model = Wav2Vec2ForCTC.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi').to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\n।]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = speech_array[0].numpy()
batch["sampling_rate"] = sampling_rate
batch["target_text"] = batch["sentence"]
batch["speech"] = librosa.resample(np.asarray(batch["speech"]), sampling_rate, 16_000)
batch["sampling_rate"] = 16_000
return batch
#Run prediction on batch
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
# we do not want to group tokens when computing the metrics
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
test_data = load_dataset("common_voice", "hi", split="test")
test_data = test_data.map(speech_file_to_array_fn)
test_data = test_data.map(evaluate, batched=True, batch_size=32)
print("WER: {:2f}".format(100 * wer.compute(predictions=test_data["pred_strings"],
references=test_data["sentence"])))
```
Link to eval notebook: https://colab.research.google.com/drive/1nZRTgKfxCD9cvy90wikTHkg2il3zgcqW#scrollTo=cXWFbhb0d7DT
WER: 23.736641% (OpenSLR Hindi+Marathi test set: https://filebin.net/snrz6bt13usv8w2e/test_large.csv)
WER: 44.083527% (Common Voice Hindi Test Split) |
Pollawat/mt5-small-thai-qa-qg | Pollawat | 2021-04-19T14:52:22Z | 38 | 4 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question-generation",
"question-answering",
"dataset:NSC2018",
"dataset:iapp-wiki-qa-dataset",
"dataset:XQuAD",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- question-generation
- question-answering
language:
- thai
- th
datasets:
- NSC2018
- iapp-wiki-qa-dataset
- XQuAD
license: mit
---
[Google's mT5](https://github.com/google-research/multilingual-t5)
This is a model for generating questions from Thai texts. It was fine-tuned on the NSC2018 corpus.
```python
from transformers import MT5Tokenizer, MT5ForConditionalGeneration
tokenizer = MT5Tokenizer.from_pretrained("Pollawat/mt5-small-thai-qa-qg")
model = MT5ForConditionalGeneration.from_pretrained("Pollawat/mt5-small-thai-qa-qg")
text = "กรุงเทพมหานคร เป็นเมืองหลวงและนครที่มีประชากรมากที่สุดของประเทศไทย เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ เป็นเมืองที่มีชื่อยาวที่สุดในโลก ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 5 ล้านคน"
input_ids = tokenizer.encode(text, return_tensors='pt')
beam_output = model.generate(
input_ids,
max_length=50,
num_beams=5,
early_stopping=True
)
print(tokenizer.decode(beam_output[0]))
>> <pad> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด <ANS> ฝั่งพระนครและฝั่งธนบุรี</s>
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
>> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด ฝั่งพระนครและฝั่งธนบุรี
``` |
google/mobilebert-uncased | google | 2021-04-19T13:32:58Z | 180,465 | 48 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"mobilebert",
"pretraining",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
## MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
MobileBERT is a thin version of BERT_LARGE, equipped with bottleneck structures and a carefully designed balance
between self-attention and feed-forward networks.
This checkpoint is the original MobileBert Optimized Uncased English:
[uncased_L-24_H-128_B-512_A-4_F-4_OPT](https://storage.googleapis.com/cloud-tpu-checkpoints/mobilebert/uncased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz)
checkpoint.
## How to use MobileBERT in `transformers`
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="google/mobilebert-uncased",
tokenizer="google/mobilebert-uncased"
)
print(
fill_mask(f"HuggingFace is creating a {fill_mask.tokenizer.mask_token} that the community uses to solve NLP tasks.")
)
```
|
shivam/mbart-large-50-finetuned-en-mr | shivam | 2021-04-18T10:19:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
Language Pair Finetuned:
- en-mr
Metrics:
- sacrebleu
- WAT 2021: 16.11
# mbart-large-finetuned-en-mr
## Model Description
This is the mbart-large-50 model fine-tuned on an En-Mr corpus.
## Intended uses and limitations
Mostly useful for English-to-Marathi translation, but the mbart-large-50 model also supports other language pairs.
### How to use
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("shivam/mbart-large-50-finetuned-en-mr")
tokenizer = MBart50TokenizerFast.from_pretrained("shivam/mbart-large-50-finetuned-en-mr", src_lang="en_XX", tgt_lang="mr_IN")
english_input_sentence = "The Prime Minister said that cleanliness, or Swachhta, is one of the most important aspects of preventive healthcare."
model_inputs = tokenizer(english_input_sentence, return_tensors="pt")
generated_tokens = model.generate(
**model_inputs,
forced_bos_token_id=tokenizer.lang_code_to_id["mr_IN"]
)
marathi_output_sentence = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(marathi_output_sentence)
#स्वच्छता हा प्रतिबंधात्मक आरोग्य सेवेतील सर्वात महत्त्वाचा पैलू आहे, असे पंतप्रधान म्हणाले.
```
#### Limitations
The model was trained on Google Colab, and since training takes a lot of time, it was trained for only a short time and a small number of epochs.
## Eval results
WAT 2021: 16.11 |
molly-hayward/bioelectra-base-discriminator | molly-hayward | 2021-04-17T16:59:46Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.
How to use the discriminator in `transformers`:
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

discriminator = ElectraForPreTraining.from_pretrained("molly-hayward/bioelectra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-base-discriminator")
```
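Continuing from the snippet above, a hedged scoring sketch (the input sentence is hypothetical; `ElectraForPreTraining` emits one logit per token, where a positive logit suggests a replaced token):
```python
inputs = tokenizer("the patient was treated with antibiotics", return_tensors="pt")
with torch.no_grad():
    logits = discriminator(**inputs).logits
print(logits.squeeze().tolist())
```
|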
molly-hayward/bioelectra-base-generator | molly-hayward | 2021-04-17T16:59:28Z | 2 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.
How to use the generator in `transformers`:

```python
import torch
from transformers import ElectraForMaskedLM, ElectraTokenizerFast

generator = ElectraForMaskedLM.from_pretrained("molly-hayward/bioelectra-base-generator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-base-generator")
```
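A minimal fill-mask sketch (the sentence is an arbitrary biomedical example):

```python
text = f"aspirin reduces the risk of heart {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = generator(**inputs).logits
# Locate the masked position and decode the highest-scoring token
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(-1)))
```
|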
molly-hayward/bioelectra-small-generator | molly-hayward | 2021-04-17T16:58:15Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.
How to use the generator in `transformers`:

```python
import torch
from transformers import ElectraForMaskedLM, ElectraTokenizerFast

generator = ElectraForMaskedLM.from_pretrained("molly-hayward/bioelectra-small-generator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-small-generator")
```
|
nateraw/resnet50 | nateraw | 2021-04-15T23:19:34Z | 71 | 0 | transformers | [
"transformers",
"pytorch",
"resnet",
"image-classification",
"dataset:imagenet",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
tags:
- image-classification
- pytorch
datasets:
- imagenet
---
# Resnet50 Model from Torchvision
## Using the model
```
pip install modelz
```
```python
import torch
from modelz import ResnetModel
model = ResnetModel.from_pretrained('nateraw/resnet50')
ex_input = torch.rand(4, 3, 224, 224)
out = model(ex_input)
``` |
mudes/multilingual-large | mudes | 2021-04-15T22:36:53Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | # MUDES - Multilingual Detection of Offensive Spans
We provide state-of-the-art models to detect toxic spans in text. We have evaluated our models on the Toxic Spans task at SemEval-2021 (Task 5).
## Usage
You can use this model when you have [MUDES](https://github.com/TharinduDR/MUDES) installed:
```bash
pip install mudes
```
Then you can use the model like this:
```python
from mudes.app.mudes_app import MUDESApp
app = MUDESApp("multilingual-large", use_cuda=False)
print(app.predict_toxic_spans("You motherfucking cunt", spans=True))
```
## System Demonstration
An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be tried out [here](http://rgcl.wlv.ac.uk/mudes/).
## Citing & Authors
If you find this model helpful, feel free to cite our publication
```bibtex
@inproceedings{ranasinghemudes,
title={{MUDES: Multilingual Detection of Offensive Spans}},
author={Tharindu Ranasinghe and Marcos Zampieri},
booktitle={Proceedings of NAACL},
year={2021}
}
```
```bibtex
@inproceedings{ranasinghe2021semeval,
title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}},
author = {Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex},
booktitle={Proceedings of SemEval},
year={2021}
}
``` |
soheeyang/rdr-ctx_encoder-single-nq-base | soheeyang | 2021-04-15T15:58:10Z | 2,454 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"dpr",
"arxiv:2010.10999",
"arxiv:2004.04906",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # rdr-ctx_encoder-single-nq-base
Reader-Distilled Retriever (`RDR`)
Sohee Yang and Minjoon Seo, [Is Retriever Merely an Approximator of Reader?](https://arxiv.org/abs/2010.10999), arXiv 2020
The paper proposes to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. The model is a [DPR](https://arxiv.org/abs/2004.04906) retriever further finetuned using knowledge distillation from the DPR reader. Using this approach, the answer recall rate increases by a large margin, especially at small numbers of top-k.
This model is the context encoder of RDR trained solely on Natural Questions (NQ) (single-nq). This model is trained by the authors and is the official checkpoint of RDR.
## Performance
The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0.
The values of DPR on the NQ dev set are taken from Table 1 of the [paper of RDR](https://arxiv.org/abs/2010.10999). The values of DPR on the NQ test set are taken from the [codebase of DPR](https://github.com/facebookresearch/DPR). DPR-adv-hn is a new DPR model released in March 2021; it is trained on the original DPR NQ train set plus a version of it in which hard negatives are mined with the DPR index itself, using the previous NQ checkpoint. Please refer to the [codebase of DPR](https://github.com/facebookresearch/DPR) for more details about DPR-adv-hn.
| | Top-K Passages | 1 | 5 | 20 | 50 | 100 |
|---------|------------------|-------|-------|-------|-------|-------|
| **NQ Dev** | **DPR** | 44.2 | - | 76.9 | 81.3 | 84.2 |
| | **RDR (This Model)** | **54.43** | **72.17** | **81.33** | **84.8** | **86.61** |
| **NQ Test** | **DPR** | 45.87 | 68.14 | 79.97 | - | 85.87 |
| | **DPR-adv-hn** | 52.47 | **72.24** | 81.33 | - | 87.29 |
| | **RDR (This Model)** | **54.29** | 72.16 | **82.8** | **86.34** | **88.2** |
## How to Use
RDR shares the same architecture as DPR; therefore, it uses `DPRContextEncoder` as the model class.
Using `AutoModel` does not properly detect whether the checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`.
Therefore, please specify the exact class to use the model.
```python
from transformers import DPRContextEncoder, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("soheeyang/rdr-ctx_encoder-single-nq-base")
ctx_encoder = DPRContextEncoder.from_pretrained("soheeyang/rdr-ctx_encoder-single-nq-base")
data = tokenizer("context comes here", return_tensors="pt")
ctx_embedding = ctx_encoder(**data).pooler_output # embedding vector for context
```
|
soheeyang/dpr-ctx_encoder-single-trivia-base | soheeyang | 2021-04-15T14:48:50Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"dpr",
"arxiv:2004.04906",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # DPRContextEncoder for TriviaQA
## dpr-ctx_encoder-single-trivia-base
Dense Passage Retrieval (`DPR`)
Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih, [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906), EMNLP 2020.
This model is the context encoder of DPR trained solely on TriviaQA (single-trivia) using the [official implementation of DPR](https://github.com/facebookresearch/DPR).
Disclaimer: This model is not from the authors of DPR; it is my reproduction. The authors did not release DPR weights trained solely on TriviaQA. I hope this checkpoint is helpful for those who want to use DPR trained only on TriviaQA.
## Performance
The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0.
The values in parentheses are those reported in the paper.
| Top-K Passages | TriviaQA Dev | TriviaQA Test |
|----------------|--------------|---------------|
| 1 | 54.27 | 54.41 |
| 5 | 71.11 | 70.99 |
| 20 | 79.53 | 79.31 (79.4) |
| 50 | 82.72 | 82.99 |
| 100 | 85.07 | 84.99 (85.0) |
## How to Use
Using `AutoModel` does not properly detect whether the checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`.
Therefore, please specify the exact class to use the model.
```python
from transformers import DPRContextEncoder, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("soheeyang/dpr-ctx_encoder-single-trivia-base")
ctx_encoder = DPRContextEncoder.from_pretrained("soheeyang/dpr-ctx_encoder-single-trivia-base")
data = tokenizer("context comes here", return_tensors="pt")
ctx_embedding = ctx_encoder(**data).pooler_output # embedding vector for context
```
|
vasilis/wav2vec2-large-xlsr-53-estonian | vasilis | 2021-04-15T09:21:31Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"et",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: et
datasets:
- common_voice
- NST Estonian ASR Database
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 - Estonian by Vasilis
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice et
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 30.658320
- name: Test CER
type: cer
value: 5.261490
---
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "et", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "et", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian")
model.to("cuda")
chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']"
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 30.658320 %
## Training
Common Voice `train` and `validation` sets were used for finetuning
for 20,000 steps (approx. 116 epochs). Both the feature extractor (`Wav2Vec2FeatureExtractor`) and the
feature projection (`Wav2Vec2FeatureProjection`) layers were frozen. Only the encoder (`Wav2Vec2EncoderStableLayerNorm`) was finetuned.
|
skplanet/dialog-koelectra-small-discriminator | skplanet | 2021-04-13T01:15:27Z | 29 | 2 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"arxiv:1406.2661",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # Dialog-KoELECTRA
Github : [https://github.com/skplanet/Dialog-KoELECTRA](https://github.com/skplanet/Dialog-KoELECTRA)
## Introduction
**Dialog-KoELECTRA** is a language model specialized for dialogue. It was trained on 22GB of colloquial and written-style Korean text data. The Dialog-KoELECTRA model is based on the [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) model. ELECTRA is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU.
<br>
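A minimal feature-extraction sketch (this assumes the checkpoint loads with the standard `transformers` ELECTRA classes; the Korean example sentence is arbitrary):

```python
import torch
from transformers import ElectraModel, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("skplanet/dialog-koelectra-small-discriminator")
model = ElectraModel.from_pretrained("skplanet/dialog-koelectra-small-discriminator")

inputs = tokenizer("안녕하세요. 오늘 날씨 어때요?", return_tensors="pt")  # "Hello. How is the weather today?"
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 256) for the small model
```
<br>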
## Released Models
We are initially releasing a small version of the pre-trained model.
The model was trained on Korean text. We hope to release other models, such as base and large models, in the future.
| Model | Layers | Hidden Size | Params | Max<br/>Seq Len | Learning<br/>Rate | Batch Size | Train Steps |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Dialog-KoELECTRA-Small | 12 | 256 | 14M | 128 | 1e-4 | 512 | 700K |
<br>
## Model Performance
Dialog-KoELECTRA shows strong performance in conversational downstream tasks.
| | **NSMC**<br/>(acc) | **Question Pair**<br/>(acc) | **Korean-Hate-Speech**<br/>(F1) | **Naver NER**<br/>(F1) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) |
| :--------------------- | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: |
| DistilKoBERT | 88.60 | 92.48 | 60.72 | 84.65 | 72.00 | 72.59 |
| **Dialog-KoELECTRA-Small** | **90.01** | **94.99** | **68.26** | **85.51** | **78.54** | **78.96** |
<br>
## Train Data
<table class="tg">
<thead>
<tr>
<th class="tg-c3ow"></th>
<th class="tg-c3ow">corpus name</th>
<th class="tg-c3ow">size</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-c3ow" rowspan="4">dialog</td>
<td class="tg-0pky"><a href="https://aihub.or.kr/aidata/85" target="_blank" rel="noopener noreferrer">Aihub Korean dialog corpus</a></td>
<td class="tg-c3ow" rowspan="4">7GB</td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://corpus.korean.go.kr/" target="_blank" rel="noopener noreferrer">NIKL Spoken corpus</a></td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://github.com/songys/Chatbot_data" target="_blank" rel="noopener noreferrer">Korean chatbot data</a></td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://github.com/Beomi/KcBERT" target="_blank" rel="noopener noreferrer">KcBERT</a></td>
</tr>
<tr>
<td class="tg-c3ow" rowspan="2">written</td>
<td class="tg-0pky"><a href="https://corpus.korean.go.kr/" target="_blank" rel="noopener noreferrer">NIKL Newspaper corpus</a></td>
<td class="tg-c3ow" rowspan="2">15GB</td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://github.com/lovit/namuwikitext" target="_blank" rel="noopener noreferrer">namuwikitext</a></td>
</tr>
</tbody>
</table>
<br>
## Vocabulary
We applied morpheme analysis using [huggingface_konlpy](https://github.com/lovit/huggingface_konlpy) when creating the vocabulary.
In our experiments, this vocabulary performed better than one created without morpheme analysis.
<table>
<thead>
<tr>
<th>vocabulary size</th>
<th>unused token size</th>
<th>limit alphabet</th>
<th>min frequency</th>
</tr>
</thead>
<tbody>
<tr>
<td>40,000</td>
<td>500</td>
<td>6,000</td>
<td>3</td>
</tr>
</tbody>
</table>
<br>
|
shiwangi27/wave2vec2-large-xlsr-hindi | shiwangi27 | 2021-04-09T20:56:03Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"xlsr-hindi",
"hi",
"dataset:openslr_hindi",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: hi
datasets:
- openslr_hindi
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- xlsr-hindi
license: apache-2.0
model-index:
- name: Fine-tuned Hindi XLSR Wav2Vec2 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
datasets:
- name: Common Voice hi
type: common_voice
args: hi
- name: OpenSLR Hindi
url: https://www.openslr.org/resources/103/
metrics:
- name: Test WER
type: wer
value: 46.05
---
# Wav2Vec2-Large-XLSR-Hindi
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Hindi, using the OpenSLR Hindi dataset for training and the Common Voice Hindi test set for evaluation. The OpenSLR Hindi training data consisted of 10,000 randomly sampled examples; the OpenSLR train and test sets were combined and used as training data to increase variation. Evaluation was done on the Common Voice test set. The OpenSLR audio is 8kHz, so it was upsampled to 16kHz for training.
When using this model, make sure that your speech input is sampled at 16kHz.
*Note: This is the first iteration of the fine-tuning. Will update this model if WER improves in future experiments.*
## Test Results
| Dataset | WER |
| ------- | --- |
| Test split Common Voice Hindi | 46.055 % |
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Hindi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\�\।\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
## Code
The Notebook used for training this model can be found at [shiwangi27/googlecolab](https://github.com/shiwangi27/googlecolab/blob/main/run_common_voice.ipynb).
I used a modified version of [run_common_voice.py](https://github.com/shiwangi27/googlecolab/blob/main/run_common_voice.py) for training.
|
vasilis/wav2vec2-large-xlsr-53-swedish | vasilis | 2021-04-09T12:23:23Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: sv-SE
datasets:
- common_voice
- NST Swedish ASR Database
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: V XLSR Wav2Vec2 Large 53 - Swedish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sv-SE
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 14.695793
- name: Test CER
type: cer
value: 5.264666
---
# Wav2Vec2-Large-XLSR-53-Swedish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset and parts of the [NST Swedish ASR Database](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-16/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish")
model.to("cuda")
chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']"
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 14.695793 %
## Training
As a first step, the Common Voice train dataset and parts of NST
(as found [here](https://github.com/se-asr/nst/tree/master)) were used.
Parts of NST were removed using this mask:
```python
mask = [(5 < len(x.split()) < 20) and np.average([len(entry) for entry in x.split()]) > 5 for x in dataset['transcript'].tolist()]
```
After training like this for 20,000 steps, the model was finetuned on all of the NST data using the mask
```python
mask = [(1 < len(x.split()) < 25) and np.average([len(entry) for entry in x.split()]) > 3 for x in dataset['transcript'].tolist()]
```
and on all of Common Voice for 100,000 more steps (approximately 16 epochs).
|
valhalla/gpt-neo-random-tiny | valhalla | 2021-04-07T16:38:40Z | 7,210 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | **This model is uploaded for testing purposes. It is a randomly initialized model, not trained on anything.** |
MalawiUniST/ISO6392.nya.ny | MalawiUniST | 2021-04-07T14:30:00Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"longformer",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | This model was trained on a Nyanja dataset using the Longformer architecture.
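A minimal fill-mask sketch (this assumes the checkpoint works with the standard `transformers` pipeline; the Nyanja example sentence is arbitrary):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="MalawiUniST/ISO6392.nya.ny")
# Use the tokenizer's own mask token rather than hard-coding it
print(fill_mask(f"Moni, dziko {fill_mask.tokenizer.mask_token}."))
```
|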
vasudevgupta/offnote-mbart-adapters-bhasha | vasudevgupta | 2021-04-07T13:53:17Z | 4 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | **Project GitHub:** https://github.com/vasudevgupta7/transformers-adapters
**Notes**
* base model can be downloaded from `facebook/mbart-large-cc25`
* `adapters-hin-eng.pt`: adapters hin-eng
* `adapters-guj-eng.pt`: adapters guj-eng
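A rough loading sketch (an assumption-heavy illustration: the `.pt` files are treated here as plain PyTorch state dicts, while full adapter support lives in the project's custom fork linked above):

```python
import torch
from transformers import MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
adapter_state = torch.load("adapters-hin-eng.pt", map_location="cpu")  # hin-eng adapter weights
# strict=False merges matching keys and ignores the rest; the custom fork is
# required for the adapter modules to actually take effect
model.load_state_dict(adapter_state, strict=False)
```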
|
navteca/roberta-base-squad2 | navteca | 2021-04-06T16:27:48Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
datasets:
- squad_v2
language: en
license: mit
pipeline_tag: question-answering
tags:
- roberta
- question-answering
---
# Roberta base model for QA (SQuAD 2.0)
This model uses [roberta-base](https://huggingface.co/roberta-base).
## Training Data
The model has been trained on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
It can be used for the question answering task.
## Usage and Performance
The trained model can be used like this:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
# Load model & tokenizer
roberta_model = AutoModelForQuestionAnswering.from_pretrained('navteca/roberta-base-squad2')
roberta_tokenizer = AutoTokenizer.from_pretrained('navteca/roberta-base-squad2')
# Get predictions
nlp = pipeline('question-answering', model=roberta_model, tokenizer=roberta_tokenizer)
result = nlp({
'question': 'How many people live in Berlin?',
'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})
print(result)
#{
# "answer": "3,520,031"
# "end": 36,
# "score": 0.96186668,
# "start": 27,
#}
```
|
seccily/wav2vec-lt-lite | seccily | 2021-04-06T05:40:27Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"lt",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: lt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Lithuanian by Seçilay KUTAL
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lt
type: common_voice
args: lt
metrics:
- name: Test WER
type: wer
---
# wav2vec-lt-lite
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("seccily/wav2vec-lt-lite")
model = Wav2Vec2ForCTC.from_pretrained("seccily/wav2vec-lt-lite")
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Decode the first two test samples (assumed usage, following the common XLSR recipe)
speech = [resampler(torchaudio.load(path)[0]).squeeze().numpy() for path in test_dataset["path"][:2]]
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print("Prediction:", processor.batch_decode(torch.argmax(logits, dim=-1)))
```
**Test Result**: 59.47 |
seduerr/t5-small-pytorch | seduerr | 2021-04-06T04:48:50Z | 273 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
- ro
- de
datasets:
- c4
tags:
- summarization
- translation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

|
castorini/monot5-3b-msmarco | castorini | 2021-04-03T13:48:44Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs).
For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai)
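A minimal relevance-scoring sketch (a hedged illustration of the monoT5 "Query: … Document: … Relevant:" input format from the paper, not the official pygaggle API; the query and document are arbitrary):

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("castorini/monot5-3b-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-3b-msmarco")

# monoT5 scores a query-document pair by the probability of generating "true"
text = "Query: who wrote hamlet Document: Hamlet is a tragedy written by William Shakespeare. Relevant:"
inputs = tokenizer(text, return_tensors="pt")
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]
true_id, false_id = tokenizer.encode("true")[0], tokenizer.encode("false")[0]
print(torch.softmax(logits[[true_id, false_id]], dim=0)[0].item())  # relevance score in [0, 1]
```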
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/) |
mami/santuycuy | mami | 2021-04-02T15:17:05Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z |
|
not-tanh/wav2vec2-large-xlsr-53-vietnamese | not-tanh | 2021-04-02T10:59:16Z | 8 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"vi",
"dataset:common_voice",
"dataset:vivos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: vi
datasets:
- common_voice
- vivos
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Ted Vietnamese XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: 39.571823
---
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice), [Vivos dataset](https://ailab.hcmus.edu.vn/vivos) and [FOSD dataset](https://data.mendeley.com/datasets/k9sxg2twv4/4).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "vi", split="test")
processor = Wav2Vec2Processor.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("not-tanh/wav2vec2-large-xlsr-53-vietnamese")
model.to("cuda")
chars_to_ignore_regex = r'[,?.!\-;:"“%\'�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 39.571823%
## Training
The Common Voice `train` and `validation` splits, the VIVOS dataset, and the FOSD dataset were used for training.
The script used for training can be found ... # TODO |
Wikidepia/IndoConvBERT-base | Wikidepia | 2021-04-02T07:22:25Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"convbert",
"feature-extraction",
"id",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
inference: false
language: id
---
# IndoConvBERT Base Model
IndoConvBERT is a ConvBERT model pretrained on Indo4B.
## Pretraining details
We follow a different training procedure: instead of using a two-phase approach that pre-trains the model for 90% of the steps with sequence length 128 and for 10% with sequence length 512, we pre-train the model with sequence length 512 for 1M steps on a v3-8 TPU.
The current version of the model is trained on Indo4B and a small Twitter dump.
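A minimal feature-extraction sketch (this assumes the checkpoint loads with the standard `transformers` ConvBERT classes; the Indonesian example sentence is arbitrary):

```python
import torch
from transformers import AutoTokenizer, ConvBertModel

tokenizer = AutoTokenizer.from_pretrained("Wikidepia/IndoConvBERT-base")
model = ConvBertModel.from_pretrained("Wikidepia/IndoConvBERT-base")

inputs = tokenizer("Ibu kota Indonesia adalah Jakarta.", return_tensors="pt")  # "The capital of Indonesia is Jakarta."
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```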
## Acknowledgement
Big thanks to TFRC (TensorFlow Research Cloud) for providing free TPU.
|
qqpann/w2v_hf_jsut_xlsr53 | qqpann | 2021-04-01T14:49:39Z | 20 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ja",
"dataset:common_voice",
"dataset:jsut",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: ja
datasets:
- common_voice
- jsut
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Japanese XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ja
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 51.72
- name: Test CER
type: cer
value: 24.89
---
# Wav2Vec2-Large-XLSR-53-Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice) and JSUT datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("qqhann/w2v_hf_jsut_xlsr53")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/w2v_hf_jsut_xlsr53")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Japanese test data of Common Voice.
```python
!pip install torchaudio
!pip install datasets transformers
!pip install jiwer
!pip install mecab-python3
!pip install unidic-lite
!python -m unidic download
!pip install jaconv
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import MeCab
from jaconv import kata2hira
from typing import List
# Japanese preprocessing
tagger = MeCab.Tagger("-Owakati")
chars_to_ignore_regex = '[\。\、\「\」\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
def text2kata(text):
node = tagger.parseToNode(text)
word_class = []
while node:
word = node.surface
wclass = node.feature.split(',')
if wclass[0] != u'BOS/EOS':
if len(wclass) <= 6:
word_class.append((word))
elif wclass[6] == None:
word_class.append((word))
else:
word_class.append((wclass[6]))
node = node.next
return ' '.join(word_class)
def hiragana(text):
return kata2hira(text2kata(text))
test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")
resampler = torchaudio.transforms.Resample(48_000, 16_000)  # Common Voice audio is 48kHz
# resampler = torchaudio.transforms.Resample(16_000, 16_000)  # JSUT is already 16kHz
processor = Wav2Vec2Processor.from_pretrained("qqhann/w2v_hf_jsut_xlsr53")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/w2v_hf_jsut_xlsr53")
model.to("cuda")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = hiragana(batch["sentence"]).strip()
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
def cer_compute(predictions: List[str], references: List[str]):
p = [" ".join(list(" " + pred.replace(" ", ""))).strip() for pred in predictions]
r = [" ".join(list(" " + ref.replace(" ", ""))).strip() for ref in references]
return wer.compute(predictions=p, references=r)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * cer_compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 51.72 %
## Training
The privately collected JSUT Japanese dataset was used for training.
|
lighteternal/SSE-TUC-mt-en-el-cased | lighteternal | 2021-03-31T17:27:05Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"en",
"el",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- el
tags:
- translation
widget:
- text: "'Katerina', is the best name for a girl."
license: apache-2.0
metrics:
- bleu
---
## English to Greek NMT
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
* source languages: en
* target languages: el
* licence: apache-2.0
* dataset: Opus, CCmatrix
* model: transformer(fairseq)
* pre-processing: tokenization + BPE segmentation
* metrics: bleu, chrf
### Model description
* Trained using the Fairseq framework, transformer_iwslt_de_en architecture
* BPE segmentation (20k codes)
* Mixed-case model
### How to use
```python
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
mname = "lighteternal/SSE-TUC-mt-en-el-cased"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
text = " 'Katerina', is the best name for a girl."
encoded = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True)
for i, output in enumerate(outputs):
    i += 1
    print(f"{i}: {output.tolist()}")
    decoded = tokenizer.decode(output, skip_special_tokens=True)
    print(f"{i}: {decoded}")
```
## Training data
Consolidated corpus from Opus and CC-Matrix (~6.6GB in total)
## Eval results
Results on Tatoeba testset (EN-EL):
| BLEU | chrF |
| ------ | ------ |
| 76.9 | 0.733 |
Results on XNLI parallel (EN-EL):
| BLEU | chrF |
| ------ | ------ |
| 65.4 | 0.624 |
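For reference, corpus-level BLEU and chrF scores like those above can be computed with [sacrebleu](https://github.com/mjpost/sacrebleu); the snippet below is a minimal sketch with placeholder hypotheses and references, not the authors' exact evaluation setup:
```python
import sacrebleu

# Placeholder system outputs and references; in practice these would come from
# translating the Tatoeba / XNLI test sets with the model above.
hypotheses = ["Η Κατερίνα είναι το καλύτερο όνομα για ένα κορίτσι."]
references = [["Η Κατερίνα είναι το καλύτερο όνομα για κορίτσι."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}  chrF: {chrf.score:.3f}")
```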
### BibTeX entry and citation info
Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call)
|
Wikidepia/indobert-lite-squad | Wikidepia | 2021-03-31T13:26:55Z | 132 | 6 | transformers | [
"transformers",
"pytorch",
"albert",
"question-answering",
"id",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: id
widget:
- text: "Kapan Einstein melepas kewarganegaraan Jerman?"
context: "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900."
---
# IndoBERT-Lite base fine-tuned on Translated SQuAD v2
[IndoBERT-Lite](https://huggingface.co/indobenchmark/indobert-lite-base-p2) trained by [Indo Benchmark](https://www.indobenchmark.com/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesia_dataset/tree/master/question-answering/SQuAD) for **Q&A** downstream task.
## Model in action
Fast usage with **pipelines**:
```python
from transformers import BertTokenizerFast, pipeline
tokenizer = BertTokenizerFast.from_pretrained(
    'Wikidepia/indobert-lite-squad'
)
qa_pipeline = pipeline(
    "question-answering",
    model="Wikidepia/indobert-lite-squad",
    tokenizer=tokenizer
)
qa_pipeline({
    'context': "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900.",
    'question': "Kapan Einstein melepas kewarganegaraan Jerman?"
})
```
# Output:
```json
{
"score":0.9799205660820007,
"start":147,
"end":151,
"answer":"1896"
}
```
README copied from [mrm8488's repository](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2)
|
skylord/wav2vec2-large-xlsr-greek-2 | skylord | 2021-03-31T09:42:31Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"el",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: el
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Greek XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice el
type: common_voice
args: el
metrics:
- name: Test WER
type: wer
value: 45.048955
---
# Wav2Vec2-Large-XLSR-53-Greek
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
The Greek CV data has a majority of male voices. To balance it, synthesised female voices were created using the approach discussed in this [slack thread](https://huggingface.slack.com/archives/C01QZ90Q83Z/p1616741140114800):
the text from the Common Voice dataset was used to synthesize voices of female speakers with [Google's TTS Standard Voice model](https://cloud.google.com/text-to-speech). A sketch of this synthesis step is shown below.
Training progression:
- Fine-tuned facebook/wav2vec2-large-xlsr-53 on the Greek Common Voice data for 5 epochs >> 56.25% WER
- Resumed from checkpoints and trained for another 15 epochs >> 34.00% WER
- Added the synthesised female voices and trained for 12 more epochs >> 34.00% WER (no change)
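The snippet below is a minimal sketch of how such synthetic female speech could be generated with the Google Cloud Text-to-Speech client; the voice selection, sample rate, and file handling are illustrative assumptions rather than the exact pipeline used for this model:
```python
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

def synthesize(sentence: str, out_path: str):
    # Request a Greek female voice; the concrete voice parameters are assumptions.
    voice = texttospeech.VoiceSelectionParams(
        language_code="el-GR",
        ssml_gender=texttospeech.SsmlVoiceGender.FEMALE,
    )
    audio_config = texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.LINEAR16,
        sample_rate_hertz=16_000,  # match the 16kHz input expected by wav2vec2
    )
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=sentence),
        voice=voice,
        audio_config=audio_config,
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)  # LINEAR16 responses include a WAV header

synthesize("Καλημέρα σας.", "synthetic_female_0.wav")
```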
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "el", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1")
model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Greek test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1")
model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 45.048955 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](...).
|
xsway/wav2vec2-large-xlsr-georgian | xsway | 2021-03-29T21:07:53Z | 2,540 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ka",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: ka
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec finetuned for Georgian
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ka
type: common_voice
args: ka
metrics:
- name: Test WER
type: wer
value: 45.28
---
# Wav2Vec2-Large-XLSR-53-Georgian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ka", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
resampler = lambda sampling_rate, y: librosa.resample(y.numpy().squeeze(), sampling_rate, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Georgian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import librosa
test_dataset = load_dataset("common_voice", "ka", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = lambda sampling_rate, y: librosa.resample(y.numpy().squeeze(), sampling_rate, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 45.28 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](...)
|
qqpann/wav2vec2-large-xlsr-japanese-0325-1200 | qqpann | 2021-03-29T10:26:40Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ja",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: ja
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Japanese XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ja
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: { wer_result_on_test } #TODO (IMPORTANT): replace {wer_result_on_test} with the WER error rate you achieved on the common_voice test set. It should be in the format XX.XX (don't add the % sign here). **Please** remember to fill out this value after you evaluated your model, so that your model appears on the leaderboard. If you fill out this model card before evaluating your model, please remember to edit the model card afterward to fill in your value
---
# Wav2Vec2-Large-XLSR-53-Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Japanese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: XX.XX %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ...
The script used for training can be found [here](...)
|
londogard/flair-swe-ner | londogard | 2021-03-29T08:06:38Z | 13 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"sv",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: sv
datasets:
- SUC 3.0
widget:
- text: "Hampus bor i Skåne och har levererat denna model idag."
---
Published with ❤️ from [londogard](https://londogard.com).
## Swedish NER in Flair (SUC 3.0)
F1-Score: **85.6** (SUC 3.0)
Predicts 8 tags:
|**Tag**|**Meaning**|
|---|---|
| PRS| person name |
| ORG | organisation name|
| TME | time unit |
| WRK | building name |
| LOC | location name |
| EVN | event name |
| MSR | measurement unit |
| OBJ | object (e.g. "Rolls-Royce" is an object in the form of a specific car) |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("londogard/flair-swe-ner")
# make example sentence
sentence = Sentence("Hampus bor i Skåne och har levererat denna model idag.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```
This yields the following output:
```
Span [0]: "Hampus" [− Labels: PRS (1.0)]
Span [3]: "Skåne" [− Labels: LOC (1.0)]
Span [9]: "idag" [− Labels: TME(1.0)]
```
So, the entities "_Hampus_" (labeled as a **PRS**), "_Skåne_" (labeled as a **LOC**), "_idag_" (labeled as a **TME**) are found in the sentence "_Hampus bor i Skåne och har levererat denna model idag._".
---
**Please mention londogard if using this model.** |
othrif/wav2vec_test | othrif | 2021-03-29T02:48:07Z | 22 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"ar",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: ar
datasets:
- https://arabicspeech.org/
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Egyptian by Zaid Alyafeai and Othmane Rifki
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: arabicspeech.org MGB-3
type: arabicspeech.org MGB-3
args: ar
metrics:
- name: Test WER
type: wer
value: 55.2
---
# Test Wav2Vec2 with Egyptian Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Egyptian Arabic using the [arabicspeech.org MGB-3](https://arabicspeech.org/mgb3-asr/) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("arabic_speech_corpus", split="test")
processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec_test")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec_test")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
``` |
othrif/wav2vec2-large-xlsr-egyptian | othrif | 2021-03-29T02:46:30Z | 69 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"arz",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: arz
datasets:
- https://arabicspeech.org/
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Egyptian Arabic by Othmane Rifki
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: arabicspeech.org MGB-3
type: arabicspeech.org MGB-3
args: ar
metrics:
- name: Test WER
type: wer
value: 55.2
---
# Wav2Vec2-Large-XLSR-53-Egyptian-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Egyptian Arabic using the [arabicspeech.org MGB-3](https://arabicspeech.org/mgb3-asr/) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian")
model.to("cuda")
chars_to_ignore_regex = '[\؛\—\_\«\»\ـ\,\?\.\!\-\;\:\"\“\%\‘\”\�\#\،\☭,\؟]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 55.2
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/othrif/xlsr-wav2vec2) |
vasilis/wav2vec2-large-xlsr-53-finnish | vasilis | 2021-03-29T02:30:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: fi
datasets:
- common_voice
- CSS10 finnish: Single Speaker Speech Dataset
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: V XLSR Wav2Vec2 Large 53 - finnish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 38.335242
- name: Test CER
type: cer
value: 6.552408
---
# Wav2Vec2-Large-XLSR-53-finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on finnish using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10 finnish: Single Speaker Speech Dataset](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the finnish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")
chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']"
replacements = {"…": "", "–": ''}
resampler = {
    48_000: torchaudio.transforms.Resample(48_000, 16_000),
    44100: torchaudio.transforms.Resample(44100, 16_000),
    32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    for key, value in replacements.items():
        batch["sentence"] = batch["sentence"].replace(key, value)
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:.2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 38.335242 %
## Training
The Common Voice train dataset was used for training, together with all of `CSS10 Finnish` using its normalized transcripts.
After 20,000 steps the model was fine-tuned on the Common Voice train and validation sets for another 2,000 steps.
|
wietsedv/wav2vec2-large-xlsr-53-frisian | wietsedv | 2021-03-28T20:09:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: fy-NL
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Frisian XLSR Wav2Vec2 Large 53 by Wietse de Vries
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fy-NL
type: common_voice
args: fy-NL
metrics:
- name: Test WER
type: wer
value: 16.25
---
# Wav2Vec2-Large-XLSR-53-Frisian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Frisian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fy-NL", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Frisian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fy-NL", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\'\“\%\‘\”]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 16.25 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
skylord/wav2vec2-large-xlsr-greek-1 | skylord | 2021-03-26T13:43:40Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"el",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: el
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Greek XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice el
type: common_voice
args: el
metrics:
- name: Test WER
type: wer
value: 34.006258
---
# Wav2Vec2-Large-XLSR-53-Greek
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "el", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1")
model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Greek test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1")
model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.006258 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](...).
|
trueto/medalbert-base-chinese | trueto | 2021-03-26T05:29:51Z | 2 | 4 | transformers | [
"transformers",
"pytorch",
"albert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # [medbert](https://github.com/trueto/medbert)
This project open-sources the models from the master's thesis "Exploration and Research on the Application of BERT Models in Chinese Clinical Natural Language Processing".
## Evaluation Benchmarks
Four benchmarks were constructed: a Chinese electronic medical record NER dataset (CEMRNER), a Chinese medical text NER dataset (CMTNER),
a Chinese medical question-question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).
| **Dataset** | **Train** | **Dev** | **Test** | **Task Type** | **Source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yiducloud |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Question-pair matching | Ping An Healthcare |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |
## Released Models
MedBERT and MedAlbert were obtained by pre-training the BERT and ALBERT models on a corpus of 650 million characters of Chinese clinical natural language text.
## Performance
Performance of each model under the same experimental environment, with identical training parameters and scripts:
| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
|MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% |
|MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
|- | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
|MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** |
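The checkpoints can presumably be loaded with 🤗 Transformers as sketched below; Chinese ALBERT checkpoints typically ship a BERT-style vocabulary, so `BertTokenizer` is used here. Treat this as an untested sketch rather than an official usage example:
```python
import torch
from transformers import BertTokenizer, AlbertModel

# Assumption: the repo ships a BERT-style vocab.txt, as is common for Chinese ALBERT models.
tokenizer = BertTokenizer.from_pretrained("trueto/medalbert-base-chinese")
model = AlbertModel.from_pretrained("trueto/medalbert-base-chinese")

inputs = tokenizer("患者自述头痛三天。", return_tensors="pt")  # "Patient reports a three-day headache."
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```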
## Citation
```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
``` |
navteca/quora-roberta-base | navteca | 2021-03-25T16:10:08Z | 4,293 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:quora",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
datasets:
- quora
language: en
license: mit
pipeline_tag: text-classification
tags:
- roberta
- text-classification
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [roberta-base](https://huggingface.co/roberta-base).
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset.
The model will predict a score between 0 and 1 indicating how likely it is that the two given questions are duplicates.
Note: The model is not suitable for estimating the general similarity of questions. For example, the two questions "How to learn Java" and "How to learn Python" will receive a rather low score, because they are not duplicates.
## Usage and Performance
The trained model can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('navteca/quora-roberta-base')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
print(scores)
```
|
dhpollack/distilbert-dummy-sentiment | dhpollack | 2021-03-23T17:40:32Z | 1,287 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"sentiment-analysis",
"testing",
"unit tests",
"multilingual",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- "multilingual"
- "en"
tags:
- "sentiment-analysis"
- "testing"
- "unit tests"
---
# DistilBert Dummy Sentiment Model
## Purpose
This is a dummy model that can be used for testing the transformers `pipeline` with the task `sentiment-analysis`. It should always give random results (i.e. `{"label": "negative", "score": 0.5}`).
## How to use
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", "dhpollack/distilbert-dummy-sentiment")
results = classifier(["this is a test", "another test"])
```
## Notes
This was created as follows:
1. Create a vocab.txt file (in /tmp/vocab.txt in this example).
```
[UNK]
[SEP]
[PAD]
[CLS]
[MASK]
```
2. Open a python shell:
```python
import transformers
config = transformers.DistilBertConfig(vocab_size=5, n_layers=1, n_heads=1, dim=1, hidden_dim=4 * 1, num_labels=2, id2label={0: "negative", 1: "positive"}, label2id={"negative": 0, "positive": 1})
model = transformers.DistilBertForSequenceClassification(config)
tokenizer = transformers.DistilBertTokenizer("/tmp/vocab.txt", model_max_length=512)
config.save_pretrained(".")
model.save_pretrained(".")
tokenizer.save_pretrained(".")
```
|
sakares/wav2vec2-large-xlsr-thai-demo | sakares | 2021-03-22T07:15:18Z | 805 | 4 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"th",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: th
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large Thai by Sakares
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice th
type: common_voice
args: th
metrics:
- name: Test WER
type: wer
value: 44.46
---
# Wav2Vec2-Large-XLSR-53-Thai
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Thai using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pythainlp.tokenize import word_tokenize
test_dataset = load_dataset("common_voice", "th", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("sakares/wav2vec2-large-xlsr-thai-demo")
model = Wav2Vec2ForCTC.from_pretrained("sakares/wav2vec2-large-xlsr-thai-demo")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
## For Thai NLP Library, please feel free to check https://pythainlp.github.io/docs/2.2/api/tokenize.html
def th_tokenize(batch):
    batch["sentence"] = " ".join(word_tokenize(batch["sentence"], engine="newmm"))
    return batch
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn).map(th_tokenize)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
Usage script [here](https://colab.research.google.com/drive/1w0VywsBtjrO2pHHPmiPugYI9yeF8nUKg?usp=sharing)
## Evaluation
The model can be evaluated as follows on the Thai test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pythainlp.tokenize import word_tokenize
import re
test_dataset = load_dataset("common_voice", "th", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("sakares/wav2vec2-large-xlsr-thai-demo")
model = Wav2Vec2ForCTC.from_pretrained("sakares/wav2vec2-large-xlsr-thai-demo")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
## For Thai NLP Library, please feel free to check https://pythainlp.github.io/docs/2.2/api/tokenize.html
def th_tokenize(batch):
    batch["sentence"] = " ".join(word_tokenize(batch["sentence"], engine="newmm"))
    return batch
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn).map(th_tokenize)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 44.46 %
Evaluate script [here](https://colab.research.google.com/drive/1WZGtHKWXBztRsuXHIdebf6uoAsp7rTnK?usp=sharing)
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/18oUbeZgBGSkz16zC_WOa154QZOdmvjyt?usp=sharing)
|
HooshvareLab/distilbert-fa-zwnj-base-ner | HooshvareLab | 2021-03-21T14:32:29Z | 130 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"distilbert",
"token-classification",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language: fa
---
# DistilbertNER
This model is fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/) that covers ten types of entities:
- Date (DAT)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Money (MON)
- Organization (ORG)
- Percent (PCT)
- Person (PER)
- Product (PRO)
- Time (TIM)
## Dataset Information
| | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM |
|:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 |
| Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 |
| Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 |
## Evaluation
The following tables summarize the scores obtained by model overall and per each class.
**Overall**
| Model | accuracy | precision | recall | f1 |
|:----------:|:--------:|:---------:|:--------:|:--------:|
| Distilbert | 0.994534 | 0.946326 | 0.95504 | 0.950663 |
**Per entities**
| | number | precision | recall | f1 |
|:---: |:------: |:---------: |:--------: |:--------: |
| DAT | 407 | 0.812048 | 0.828010 | 0.819951 |
| EVE | 256 | 0.955056 | 0.996094 | 0.975143 |
| FAC | 248 | 0.972549 | 1.000000 | 0.986083 |
| LOC | 2884 | 0.968403 | 0.967060 | 0.967731 |
| MON | 98 | 0.925532 | 0.887755 | 0.906250 |
| ORG | 3216 | 0.932095 | 0.951803 | 0.941846 |
| PCT | 94 | 0.936842 | 0.946809 | 0.941799 |
| PER | 2645 | 0.959818 | 0.957278 | 0.958546 |
| PRO | 318 | 0.963526 | 0.996855 | 0.979907 |
| TIM | 43 | 0.760870 | 0.813953 | 0.786517 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "HooshvareLab/distilbert-fa-zwnj-base-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo. |
HooshvareLab/albert-fa-zwnj-base-v2-ner | HooshvareLab | 2021-03-21T14:25:09Z | 64 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"albert",
"token-classification",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language: fa
---
# AlbertNER
This model is fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/) that covers ten types of entities:
- Date (DAT)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Money (MON)
- Organization (ORG)
- Percent (PCT)
- Person (PER)
- Product (PRO)
- Time (TIM)
## Dataset Information
| | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM |
|:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 |
| Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 |
| Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 |
## Evaluation
The following tables summarize the scores obtained by model overall and per each class.
**Overall**
| Model | accuracy | precision | recall | f1 |
|:----------:|:--------:|:---------:|:--------:|:--------:|
| Albert | 0.993405 | 0.938907 | 0.943966 | 0.941429 |
**Per entities**
| | number | precision | recall | f1 |
|:---: |:------: |:---------: |:--------: |:--------: |
| DAT | 407 | 0.820639 | 0.820639 | 0.820639 |
| EVE | 256 | 0.936803 | 0.984375 | 0.960000 |
| FAC | 248 | 0.925373 | 1.000000 | 0.961240 |
| LOC | 2884 | 0.960818 | 0.960818 | 0.960818 |
| MON | 98 | 0.913978 | 0.867347 | 0.890052 |
| ORG | 3216 | 0.920892 | 0.937500 | 0.929122 |
| PCT | 94 | 0.946809 | 0.946809 | 0.946809 |
| PER | 2644 | 0.960000 | 0.944024 | 0.951945 |
| PRO | 318 | 0.942943 | 0.987421 | 0.964670 |
| TIM | 43 | 0.780488 | 0.744186 | 0.761905 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install sentencepiece
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "HooshvareLab/albert-fa-zwnj-base-v2-ner" # Albert
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo. |
jpcorb20/pegasus-large-reddit_tifu-samsum-256 | jpcorb20 | 2021-03-20T15:14:53Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"google/pegasus-reddit_tifu",
"summarization",
"samsum",
"en",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language:
- en
thumbnail:
tags:
- pytorch
- google/pegasus-reddit_tifu
- summarization
- samsum
license:
datasets:
- samsum
metrics:
- rouge
---
# Samsum Pegasus (Reddit/TIFU) for conversational summaries
## Model description
Pegasus (Reddit/TIFU) for conversational summaries trained on the samsum dataset!
## Training data
The data is the [samsum](https://huggingface.co/datasets/samsum) dataset for conversational summaries.
The initial weights were taken from [google/pegasus-reddit_tifu](https://huggingface.co/google/pegasus-reddit_tifu). The hypothesis was that starting from weights trained on a larger summarization dataset with casual language, such as Reddit TIFU, would help convergence on the samsum dataset.
## Training procedure
Used the _example/seq2seq/run_summarization.py_ script from the transformers source _4.5.0dev0_.
n_epochs: 3,\
batch_size: 8, \
max_source_length: 256,\
max_target_length: 128
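For reference, a hypothetical invocation of that script with the hyperparameters above might look as follows; the exact flags accepted can vary between transformers versions, so treat this as a sketch rather than the authors' actual command:

```bash
python examples/seq2seq/run_summarization.py \
    --model_name_or_path google/pegasus-reddit_tifu \
    --dataset_name samsum \
    --do_train --do_eval \
    --num_train_epochs 3 \
    --per_device_train_batch_size 8 \
    --max_source_length 256 \
    --max_target_length 128 \
    --predict_with_generate \
    --output_dir pegasus-samsum
```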
## Eval results
eval_gen_len: 35.9939,\
eval_loss: 1.4284523725509644,\
eval_rouge1: 46.5613,\
eval_rouge2: 23.6137,\
eval_rougeL: 37.2397,\
eval_rougeLsum: 42.7126,\
eval_samples_per_second: 4.302
## Example
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "jpcorb20/pegasus-large-reddit_tifu-samsum-256"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

src_text = """Carter: Hey Alexis, I just wanted to let you know that I had a really nice time with you tonight.\r\nAlexis: Thanks Carter. Yeah, I really enjoyed myself as well.\r\nCarter: If you are up for it, I would really like to see you again soon.\r\nAlexis: Thanks Carter, I'm flattered. But I have a really busy week coming up.\r\nCarter: Yeah, no worries. I totally understand. But if you ever want to go grab dinner again, just let me know.\r\nAlexis: Yeah of course. Thanks again for tonight. Carter: Sure. Have a great night.\r\n"""

token_params = dict(max_length=256, truncation=True, padding='longest', return_tensors="pt")
batch = tokenizer(src_text, **token_params)

# beam-search settings are generation arguments, so they belong in generate(), not batch_decode()
gen_params = dict(num_beams=5, min_length=16, max_length=128, length_penalty=2.0)
translated = model.generate(**batch, **gen_params)

tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text)
```
 |
sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco | sebastian-hofstaetter | 2021-03-16T17:03:58Z | 42 | 2 | transformers | [
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"dpr",
"dense-passage-retrieval",
"knowledge-distillation",
"en",
"dataset:ms_marco",
"arxiv:2010.02666",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: "en"
tags:
- dpr
- dense-passage-retrieval
- knowledge-distillation
datasets:
- ms_marco
---
# Margin-MSE Trained DistilBert for Dense Passage Retrieval
We provide a retrieval-trained DistilBERT-based model (we call the architecture BERT_Dot). Our model is trained with Margin-MSE using a 3-teacher BERT_Cat (concatenated BERT scoring) ensemble on MSMARCO-Passage.
This instance can be used to **re-rank a candidate set** or **directly for a vector index based dense retrieval**. The architecture is a 6-layer DistilBERT, without architecture additions or modifications (we only change the weights during training) - to receive a query/passage representation we pool the CLS vector. We use the same BERT layers for both query and passage encoding (yields better results, and lowers memory requirements).
If you want to know more about our simple yet effective knowledge distillation method for efficient information retrieval models across a variety of student architectures (the method used to train this model instance), check out our paper: https://arxiv.org/abs/2010.02666 🎉
For more information, training data, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/neural-ranking-kd
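As a quick orientation before visiting the repo, here is a minimal dot-product scoring sketch consistent with the description above (CLS pooling, shared encoder); it is an assumption-based illustration, not the authors' reference code:

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def encode(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch).last_hidden_state
    return out[:, 0, :]  # pool the CLS vector, as described above

query_vec = encode(["what is dense passage retrieval"])
passage_vec = encode(["Dense retrieval encodes queries and passages into a shared vector space."])
score = query_vec @ passage_vec.T  # dot-product relevance score
print(score)
```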
## Effectiveness on MSMARCO Passage & TREC-DL'19
We trained our model on the MSMARCO standard ("small"-400K query) training triples using knowledge distillation, with a batch size of 32 on a single consumer-grade GPU (11GB memory).
For re-ranking we used the top-1000 BM25 results.
### MSMARCO-DEV
| | MRR@10 | NDCG@10 | Recall@1K |
|----------------------------------|--------|---------|-----------------------------|
| BM25 | .194 | .241 | .868 |
| **Margin-MSE BERT_Dot** (Re-ranking) | .332 | .391 | .868 (from BM25 candidates) |
| **Margin-MSE BERT_Dot** (Retrieval) | .323 | .381 | .957 |
### TREC-DL'19
For MRR and Recall we use the recommended binarization point of the graded relevance of 2. This might skew the results when compared to other binarization point numbers.
| | MRR@10 | NDCG@10 | Recall@1K |
|----------------------------------|--------|---------|-----------------------------|
| BM25 | .689 | .501 | .739 |
| **Margin-MSE BERT_Dot** (Re-ranking) | .862 | .712 | .739 (from BM25 candidates) |
| **Margin-MSE BERT_Dot** (Retrieval) | .868 | .697 | .769 |
For more baselines, info and analysis, please see the paper: https://arxiv.org/abs/2010.02666
## Limitations & Bias
- The model inherits social biases from both DistilBERT and MSMARCO.
- The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text.
## Citation
If you use our model checkpoint please cite our work as:
```
@misc{hofstaetter2020_crossarchitecture_kd,
title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
author={Sebastian Hofst{\"a}tter and Sophia Althammer and Michael Schr{\"o}der and Mete Sertkan and Allan Hanbury},
year={2020},
eprint={2010.02666},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
``` |
HooshvareLab/distilbert-fa-zwnj-base | HooshvareLab | 2021-03-16T16:30:29Z | 322 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"distilbert",
"fill-mask",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language: fa
license: apache-2.0
---
# DistilBERT
This model handles the zero-width non-joiner (ZWNJ) character used in Persian writing. It was also trained on a new multi-genre corpus with a new vocabulary.
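Since this is a fill-mask checkpoint, a quick smoke test could look like the following sketch (the Persian example sentence is only illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="HooshvareLab/distilbert-fa-zwnj-base")
# read the mask token from the tokenizer instead of hard-coding it
masked = f"ما در هوشواره معتقدیم {fill_mask.tokenizer.mask_token} آینده را می‌سازد."
print(fill_mask(masked))
```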
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. |
adzcodez/TokenClassificationTest | adzcodez | 2021-03-16T14:18:09Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | distilbert-base-uncased finetuned on the conll2003 dataset for NER.
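A hedged usage sketch (the checkpoint's exact label mapping is not documented here, so inspect the returned `entity` fields rather than assuming standard CoNLL tags):

```python
from transformers import pipeline

ner = pipeline("ner", model="adzcodez/TokenClassificationTest")
print(ner("Hugging Face is based in New York City."))
```
 |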
patrickvonplaten/wav2vec2-base-timit-demo | patrickvonplaten | 2021-03-12T15:12:49Z | 655 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ## Wav2Vec2 Fine-Tuned on the English TIMIT Dataset
The model was fine-tuned in a Google Colab notebook for demonstration purposes.
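For completeness, a minimal transcription sketch; `sample.wav` is a hypothetical 16 kHz mono recording, and the snippet assumes the repo ships a processor config:

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("patrickvonplaten/wav2vec2-base-timit-demo")
model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-timit-demo")

speech, sample_rate = sf.read("sample.wav")  # hypothetical input file
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```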
Please refer to [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information about the model. |
gagan3012/keytotext | gagan3012 | 2021-03-11T20:23:32Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | # keytotext
The idea is to build a model that takes keywords as input and generates sentences as output.
### Model:
Two Models have been built:
- Using T5-base size = 850 MB can be found here: https://huggingface.co/gagan3012/keytotext
- Using T5-small size = 230 MB can be found here: https://huggingface.co/gagan3012/keytotext-small
#### Usage:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/keytotext-small")
model = AutoModelWithLMHead.from_pretrained("gagan3012/keytotext-small")
```
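Generation could then look like the sketch below; note that the keyword separator the model was trained with is an assumption here, so check the project repository for the exact input format:

```python
# reuses tokenizer/model from the snippet above
keywords = ["India", "Wedding"]
prompt = " | ".join(keywords)  # assumed separator, verify against the repo

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(inputs.input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```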
### Demo:
[](https://share.streamlit.io/gagan3012/keytotext/app.py)
https://share.streamlit.io/gagan3012/keytotext/app.py

### Example:
['India', 'Wedding'] -> We are celebrating today in New Delhi with three wedding anniversary parties.
|
navteca/quora-roberta-large | navteca | 2021-03-10T14:57:04Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:quora",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
datasets:
- quora
language: en
license: mit
pipeline_tag: text-classification
tags:
- roberta
- text-classification
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [roberta-large](https://huggingface.co/roberta-large).
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset.
The model will predict a score between 0 and 1 indicating how likely it is that the two given questions are duplicates.
Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.
## Usage and Performance
The trained model can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
print(scores)
```
|
yjernite/bart_eli5 | yjernite | 2021-03-09T22:31:11Z | 359 | 11 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:eli5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
license: apache-2.0
datasets:
- eli5
---
## BART ELI5
Read the article at https://yjernite.github.io/lfqa.html and try the demo at https://huggingface.co/qa/
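The card itself stays minimal, so here is a hedged loading-and-generation sketch; the "question: ... context: ..." prompt format is an assumption based on common ELI5 setups, not something this card specifies:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("yjernite/bart_eli5")
model = BartForConditionalGeneration.from_pretrained("yjernite/bart_eli5")

prompt = "question: Why is the sky blue? context: <retrieved support passages>"  # assumed format
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024)
answer_ids = model.generate(inputs.input_ids, num_beams=4, min_length=64, max_length=256)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```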
|
wptoux/albert-chinese-large-qa | wptoux | 2021-03-09T07:48:40Z | 65 | 12 | transformers | [
"transformers",
"pytorch",
"albert",
"question-answering",
"Question Answering",
"zh",
"dataset:webqa",
"dataset:dureader",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language:
- zh
tags:
- Question Answering
license: apache-2.0
datasets:
- webqa
- dureader
---
# albert-chinese-large-qa
Albert large QA model pretrained from baidu webqa and baidu dureader datasets.
## Data source
+ baidu webqa 1.0
+ baidu dureader
## Training Method
We combined the two datasets and created a new dataset in SQuAD format, comprising 705,139 training samples and 69,638 validation samples.
We fine-tuned the model starting from the albert-chinese-large model.
## Hyperparams
+ learning_rate 1e-5
+ max_seq_length 512
+ max_query_length 50
+ max_answer_length 300
+ doc_stride 256
+ num_train_epochs 2
+ warmup_steps 1000
+ per_gpu_train_batch_size 8
+ gradient_accumulation_steps 3
+ n_gpu 2 (Nvidia Tesla P100)
## Usage
```
from transformers import AutoModelForQuestionAnswering, BertTokenizer
model = AutoModelForQuestionAnswering.from_pretrained('wptoux/albert-chinese-large-qa')
tokenizer = BertTokenizer.from_pretrained('wptoux/albert-chinese-large-qa')
```
***Important: use BertTokenizer***
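With the model and tokenizer loaded as above, extractive QA can be run through the pipeline API; the Chinese question/context pair below is only illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(question="中国的首都是哪里?", context="中国的首都是北京。")
print(result)
```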
## MoreInfo
Please visit https://github.com/wptoux/albert-chinese-large-webqa for details.
|
Jade/bert_base_law | Jade | 2021-03-08T06:59:50Z | 0 | 0 | null | [
"NLP",
"LAW",
"dataset:WIP",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
language: "zh_CN"
thumbnail: "url to a thumbnail used in social sharing"
tags:
- NLP
- LAW
license: "MIT"
datasets:
- WIP
metrics:
- WIP
--- |
Darkrider/covidbert_mednli | Darkrider | 2021-03-07T15:20:12Z | 4 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z | # CovidBERT-MedNLI
This is the model **CovidBERT** trained by DeepSet on AllenAI's [CORD19 Dataset](https://pages.semanticscholar.org/coronavirus-research) of scientific articles about coronaviruses.
The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [1] using the **average pooling strategy** and a **softmax loss**.
It is further fine-tuned on both MedNLI datasets available at Physionet.
[ACL-BIONLP 2019](https://physionet.org/content/mednli-bionlp19/1.0.1/)
[MedNLI from MIMIC](https://physionet.org/content/mednli/1.0.0/)
Parameter details for the original training on CORD-19 are available on [DeepSet's MLFlow](https://public-mlflow.deepset.ai/#/experiments/2/runs/ba27d00c30044ef6a33b1d307b4a6cba)
**Base model**: `deepset/covid_bert_base` from HuggingFace's `AutoModel`.
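Given the average-pooling strategy described above, sentence embeddings can be approximated with plain `transformers` as in this sketch (assuming the uploaded weights load via `AutoModel`; the example sentences are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Darkrider/covidbert_mednli")
model = AutoModel.from_pretrained("Darkrider/covidbert_mednli")

sentences = ["The patient shows signs of pneumonia.", "Chest imaging reveals lung inflammation."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state

# mean pooling over non-padding tokens, matching the strategy described above
mask = batch.attention_mask.unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(1) / mask.sum(1)
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```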
|
yhavinga/mt5-base-cnn-nl | yhavinga | 2021-03-05T07:48:08Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"dataset:cnn_dm_nl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
tags:
- summarization
language:
- dutch
datasets:
- cnn_dm_nl
widget:
- text: "(CNN) Skywatchers in West-Noord-Amerika zijn in voor een traktatie: een bijna vijf minuten totale maansverduistering vanmorgen. Hier is hoe het zich ontvouwt:. Het begon om 3:16 a.m. Pacific Daylight Tijd, toen de maan begon te bewegen in de schaduw van de Aarde. Voor het volgende uur en 45 minuten, die schaduw zal bewegen over de maan en verzwolgen het om 4:58 a.m. Pacific Time. De totale verduistering zal slechts vier minuten en 43 seconden duren, en NASA zegt dat maakt het de kortste van de eeuw. Kijken live op NASA TV. Terwijl mensen ten westen van de Mississippi River zal het beste uitzicht hebben, ten minste een gedeeltelijke verduistering zal zichtbaar zijn over de hele natie. Maar zonsopgang zal de show te onderbreken op de Oostkust. Delen van Zuid-Amerika, India, China en China Een maansverduistering gebeurt wanneer de zon, de aarde en de maan een rechte lijn vormen in de ruimte, met de aarde in het midden. De zon schijnt op de Aarde en creëert een schaduw. Als de maan dieper in die schaduw beweegt, lijkt het donker te worden en lijkt zelfs een roodachtige kleur te zijn. Waarom rood? Omdat de atmosfeer van de Aarde het grootste deel van het blauwe licht filtert. Sommige mensen hebben het effect van de \"bloedmaan\" bijgenaamd. NASA zegt dat maansverduisteringen meestal ten minste twee keer per jaar plaatsvinden, maar deze verduistering is de derde in een reeks van vier op een rij, bekend als een \"tetrad.\" De eerste was op 15 april 2014. De tweede was in september 2014, de volgende is zaterdag en er zal er een meer zijn, op 28 september. Als je meer wilt weten over de verduistering, NASA astronoom Mitzi Adam. Deel uw foto's met CNN iReport."
- text: "(CNN) Filipino's worden gewaarschuwd om op wacht te staan voor flash overstromingen en aardverschuivingen als tropische storm Maysak benaderde de Aziatische eiland natie zaterdag. Slechts een paar dagen geleden, Maysak kreeg super tyfoon status dankzij zijn aanhoudende 150 km/h winden. Het heeft sindsdien verloren veel stoom als het naar het westen in de Stille Oceaan heeft gedraaid. Het is nu geclassificeerd als een tropische storm, volgens de Filipijnse nationale weerdienst, die noemt het een andere naam, Chedeng. Het heeft stabiele winden van meer dan 70 km/h (115 km/h) en gusts tot 90 km/h vanaf 17.00 uur (5 uur ET) Zaterdag. Toch, dat betekent niet dat Maysak zal geen pak een wallop. Autoriteiten nam preventieve stappen om mensen veilig te houden zoals barring outdoor activiteiten zoals zwemmen, surfen, di. Gabriel Llave, een ramp ambtenaar, vertelde PNA dat toeristen die aankomen zaterdag in en rond de kustplaats van Aurora \"zal niet worden geaccepteerd door de eigenaren van hotels, resorts, herbergen en dergelijke... en zal worden geadviseerd om terug te keren naar hun respectievelijke plaatsen.\" Aldczar Aurelio, een meteoroloog met de Filippijnse Atmosferische, Geofysische en Astronomische Diensten Administratie (PAGASA), zei dat de storm was gecentreerd 200 mijl ten zuidwesten van de provincie Aurora vanaf 5 uur (5 uur ET) en richting het westen op een 12.5 mph clip. Het is verwacht dat landval zondagochtend maken op de zuidoostelijke kust van de provincie Isabela en zijn uit de Filippijnen tegen maandag. Ahead van de storm. Isabela Gov. Faustino Dry III waarschuwde zaterdag dat bewoners moet handelen als deze zal maken landfall zondagochtend op de zuidoostelijke kust van de provincie Isabela en zijn uit de Filippijnen voor maandag."
---
# mt5-base-cnn-nl
mt5-base fine-tuned on CNN/DailyMail translated to Dutch (nl).
* Learning rate 1e-3
* Trained for 1 epoch
* Max source length 1024
* Max target length 142
* rouge1 31.1766
* rouge2 8.4538
* rougeL 17.8674
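A minimal inference sketch, mirroring the training lengths listed above (the Dutch input is a placeholder; any of the widget examples works):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("yhavinga/mt5-base-cnn-nl")
model = AutoModelForSeq2SeqLM.from_pretrained("yhavinga/mt5-base-cnn-nl")

article = "..."  # a Dutch news article, e.g. one of the widget texts above
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(inputs.input_ids, max_length=142, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```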
|
patrickvonplaten/wav2vec2-base-100h-13K-steps | patrickvonplaten | 2021-03-03T13:11:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | Fine-tuning of `wav2vec2-base` on 100h of Librispeech training data. Results on "clean" data are very similar to those of the [official model](https://huggingface.co/facebook/wav2vec2-base-100h). However, the result on "other" is significantly worse - the model seems to have overfit to the "clean" data.
Model was trained on *librispeech-clean-train.100* with following hyper-parameters:
- 2 GPUs Titan RTX
- Total update steps 13000
- Batch size per GPU: 32, corresponding to a *total batch size* of ca. 1500 seconds of audio
- Adam with linear decaying learning rate with 3000 warmup steps
- dynamic grouping for batch
- fp16
- attention_mask was **not** used during training
Check: https://wandb.ai/patrickvonplaten/huggingface/reports/Project-Dashboard--Vmlldzo1MDI2MTU?accessToken=69z0mrkoxs1msgh71p4nntr9shi6mll8rhtbo6c56yynygw0scp11d8z9o1xd0uk
*Result (WER)* on Librispeech test:
| "clean" | "other" |
|---|---|
| 6.5 | 18.7 | |
hfl/chinese-xlnet-mid | hfl | 2021-03-03T01:46:39Z | 30 | 9 | transformers | [
"transformers",
"pytorch",
"tf",
"xlnet",
"text-generation",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
---
## Chinese Pre-Trained XLNet
This project provides an XLNet pre-trained model for Chinese, which aims to enrich Chinese natural language processing resources and broaden the selection of Chinese pre-trained models.
We welcome all experts and scholars to download and use this model.
This project is based on CMU/Google official XLNet: https://github.com/zihangdai/xlnet
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
hfl/chinese-xlnet-base | hfl | 2021-03-03T01:44:59Z | 330 | 30 | transformers | [
"transformers",
"pytorch",
"tf",
"xlnet",
"text-generation",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
---
## Chinese Pre-Trained XLNet
This project provides an XLNet pre-trained model for Chinese, which aims to enrich Chinese natural language processing resources and broaden the selection of Chinese pre-trained models.
We welcome all experts and scholars to download and use this model.
This project is based on CMU/Google official XLNet: https://github.com/zihangdai/xlnet
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
hfl/chinese-electra-large-discriminator | hfl | 2021-03-03T01:42:48Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
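Following the loading note at the top of this card, a minimal discriminator sketch (the input sentence is purely illustrative):

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("hfl/chinese-electra-large-discriminator")
model = ElectraForPreTraining.from_pretrained("hfl/chinese-electra-large-discriminator")

inputs = tokenizer("今天天气很好", return_tensors="pt")  # illustrative sentence
with torch.no_grad():
    logits = model(**inputs).logits

# probabilities close to 1 mark tokens the discriminator believes were replaced
print(torch.sigmoid(logits))
```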
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-small-ex-discriminator | hfl | 2021-03-03T01:39:26Z | 2 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-small-ex-generator | hfl | 2021-03-03T01:39:16Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"fill-mask",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-small-generator | hfl | 2021-03-03T01:38:55Z | 688 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"fill-mask",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-180g-large-generator | hfl | 2021-03-03T01:27:24Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"fill-mask",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
# This model was trained on 180G of data; we recommend using it over the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
hfl/chinese-electra-180g-base-generator | hfl | 2021-03-03T01:26:40Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"fill-mask",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
# This model was trained on 180G of data; we recommend using it over the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
hfl/chinese-electra-180g-base-discriminator | hfl | 2021-03-03T01:26:14Z | 1,185 | 11 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
---
# This model was trained on 180G of data; we recommend using it over the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
hfl/chinese-electra-180g-small-ex-generator | hfl | 2021-03-03T01:25:06Z | 1 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"fill-mask",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
# This model was trained on 180G of data; we recommend using it over the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
hfl/chinese-electra-180g-small-generator | hfl | 2021-03-03T01:23:58Z | 6 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"fill-mask",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
# This model was trained on 180G of data; we recommend using it over the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
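Since this checkpoint is the generator, it can be exercised through the fill-mask pipeline; a minimal sketch with an illustrative sentence:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="hfl/chinese-electra-180g-small-generator")
masked = f"中国的首都是{fill_mask.tokenizer.mask_token}京。"  # illustrative input
print(fill_mask(masked))
```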
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
flair/ner-dutch | flair | 2021-03-02T22:03:57Z | 316 | 3 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"nl",
"dataset:conll2003",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: nl
datasets:
- conll2003
widget:
- text: "George Washington ging naar Washington."
---
# Dutch NER in Flair (default model)
This is the standard 4-class NER model for Dutch that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **92.58** (CoNLL-03 Dutch)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on Transformer embeddings and LSTM-CRF.
---
# Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-dutch")
# make example sentence
sentence = Sentence("George Washington ging naar Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (0.997)]
Span [5]: "Washington" [− Labels: LOC (0.9996)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging naar Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import CONLL_03_DUTCH
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer
# 1. get the corpus
corpus: Corpus = CONLL_03_DUTCH()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize embeddings
embeddings = TransformerWordEmbeddings('wietsedv/bert-base-dutch-cased')
# 5. initialize sequence tagger
tagger: SequenceTagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
trainer: ModelTrainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/ner-dutch',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik-etal-2019-flair,
title = "{FLAIR}: An Easy-to-Use Framework for State-of-the-Art {NLP}",
author = "Akbik, Alan and
Bergmann, Tanja and
Blythe, Duncan and
Rasul, Kashif and
Schweter, Stefan and
Vollgraf, Roland",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics (Demonstrations)",
year = "2019",
url = "https://www.aclweb.org/anthology/N19-4010",
pages = "54--59",
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
stefan-it/flair-distilbert-ner-germeval14 | stefan-it | 2021-03-02T18:32:30Z | 10 | 1 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"dataset:germeval_14",
"license:mit",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
datasets:
- germeval_14
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
widget:
- text: "Hugging Face ist eine französische Firma mit Sitz in New York."
license: mit
---
# Flair NER model trained on GermEval14 dataset
This model was trained on the official [GermEval14](https://sites.google.com/site/germeval2014ner/data)
dataset using the [Flair](https://github.com/flairNLP/flair) framework.
It uses a fine-tuned German DistilBERT model from [here](https://huggingface.co/distilbert-base-german-cased).
# Results
| Dataset \ Run | Run 1 | Run 2 | Run 3† | Run 4 | Run 5 | Avg.
| ------------- | ----- | ----- | --------- | ----- | ----- | ----
| Development | 87.05 | 86.52 | **87.34** | 86.85 | 86.46 | 86.84
| Test | 85.43 | 85.88 | 85.72 | 85.47 | 85.62 | 85.62
† denotes that this model is selected for upload.
# Flair Fine-Tuning
We used the following script to fine-tune the model on the GermEval14 dataset:
```python
from argparse import ArgumentParser
import torch, flair
# dataset, model and embedding imports
from flair.datasets import GERMEVAL_14
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer
if __name__ == "__main__":
# All arguments that can be passed
parser = ArgumentParser()
parser.add_argument("-s", "--seeds", nargs='+', type=int, default='42') # pass list of seeds for experiments
parser.add_argument("-c", "--cuda", type=int, default=0, help="CUDA device") # which cuda device to use
parser.add_argument("-m", "--model", type=str, help="Model name (such as Hugging Face model hub name")
# Parse experimental arguments
args = parser.parse_args()
# use cuda device as passed
flair.device = f'cuda:{str(args.cuda)}'
# for each passed seed, do one experimental run
for seed in args.seeds:
flair.set_seed(seed)
# model
hf_model = args.model
# initialize embeddings
embeddings = TransformerWordEmbeddings(
model=hf_model,
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=False,
respect_document_boundaries=False,
)
# select dataset depending on which language variable is passed
corpus = GERMEVAL_14()
# make the dictionary of tags to predict
tag_dictionary = corpus.make_tag_dictionary('ner')
# init bare-bones sequence tagger (no reprojection, LSTM or CRF)
tagger: SequenceTagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# init the model trainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# make string for output folder
output_folder = f"flert-ner-{hf_model}-{seed}"
# train with XLM parameters (AdamW, 20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train(
output_folder,
learning_rate=5.0e-5,
mini_batch_size=16,
mini_batch_chunk_size=1,
max_epochs=10,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
train_with_dev=False,
)
```
|
sarnikowski/convbert-small-da-cased | sarnikowski | 2021-03-01T22:15:15Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"convbert",
"da",
"arxiv:2008.02496",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: da
license: cc-by-4.0
---
# Danish ConvBERT small (cased)
[ConvBERT](https://arxiv.org/abs/2008.02496) model pretrained on a custom Danish corpus (~17.5 GB).
For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers
## Usage
```python
from transformers import ConvBertTokenizer, ConvBertModel
tokenizer = ConvBertTokenizer.from_pretrained("sarnikowski/convbert-small-da-cased")
model = ConvBertModel.from_pretrained("sarnikowski/convbert-small-da-cased")
```
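Extracting contextual embeddings then follows the usual pattern; a short sketch with an illustrative Danish sentence:

```python
import torch

# reuses tokenizer/model from the snippet above
inputs = tokenizer("Det her er en dansk sætning.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, sequence_length, hidden_size)
```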
## Questions?
If you have any questions feel free to open an issue on the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to [email protected]
|
nsi319/legal-led-base-16384 | nsi319 | 2021-03-01T12:33:48Z | 298 | 13 | transformers | [
"transformers",
"pytorch",
"led",
"text2text-generation",
"summarization",
"en",
"license:mit",
"autotrain_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language: en
tags: summarization
metrics:
- rouge
- precision
inference: false
license: mit
---
## LED for legal summarization of documents
This is a Longformer Encoder Decoder ([led-base-16384](https://huggingface.co/allenai/led-base-16384)) model for the **legal domain**, trained for the **long document abstractive summarization** task. The input document can be up to 16,384 tokens long.
## Training data
The **legal-led-base-16384** model was trained on the [sec-litigation-releases](https://www.sec.gov/litigation/litreleases.htm) dataset, consisting of more than 2,700 litigation releases and complaints.
## How to use
```Python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("nsi319/legal-led-base-16384")
model = AutoModelForSeq2SeqLM.from_pretrained("nsi319/legal-led-base-16384")
padding = "max_length"
text="""On March 2, 2018, the Securities and Exchange Commission announced securities fraud charges against a U.K.-based broker-dealer and its investment manager in connection with manipulative trading in the securities of HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View's securities as well as the securities of another microcap issuer, West Coast Ventures Group Corp. The SEC further announced the institution of an order suspending trading in the securities of HD View.These charges arise in part from an undercover operation by the Federal Bureau of Investigation, which also resulted in related criminal prosecutions against these defendants by the Office of the United States Attorney for the Eastern District of New York.In a complaint filed in the U.S. District Court for the Eastern District of New York, the SEC alleges that Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, manipulated the market for HD View's common stock. The scheme involved an undercover FBI agent who described his business as manipulating U.S. stocks through pump-and-dump schemes. Kyriacou and the agent discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit.The SEC's complaint against Beaufort and Kyriacou alleges that they:opened brokerage accounts for the undercover agent in the names of nominees in order to conceal his identity and his connection to the anticipated trading activity in the accounts suggested that the undercover agent could create the false appearance that HD View's stock was liquid in advance of a pump-and-dump by "gam[ing] the market" through matched trades executed multiple purchase orders of HD View shares with the understanding that Beaufort's client had arranged for an associate to simultaneously offer an equivalent number of shares at the same priceA second complaint filed by the SEC in the U.S. District Court for the Eastern District of New York alleges that in a series of recorded telephone conversations with the undercover agent, HD View CEO Dennis Mancino and William T. Hirschy agreed to manipulate HD View's common stock by using the agent's network of brokers to generate fraudulent retail demand for the stock in exchange for a kickback from the trading proceeds. According to the complaint, the three men agreed that Mancino and Hirschy would manipulate HD View stock to a higher price before using the agent's brokers to liquidate their positions at an artificially inflated price. The SEC's complaint also alleges that Mancino and Hirschy executed a "test trade" on Jan. 31, 2018, coordinated by the agent, consisting of a sell order placed by the defendants filled by an opposing purchase order placed by a broker into an account at Beaufort. Unbeknownst to Mancino and Hirschy, the Beaufort account used for this trade was a nominal account that was opened and funded by the agent. 
The SEC's complaint also alleges that, prior to their contact with the undercover agent, Mancino and Hirschy manipulated the market for HD View and for West Coast by using brokerage accounts that they owned, controlled, or were associated with –including TJM Investments Inc., DJK Investments 10 Inc., WT Consulting Group LLC – to effect manipulative "matched trades."The SEC's complaint against Beaufort and Kyriacou charges the defendants with violating Section 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder. The SEC also charged Hirschy, Mancino, and their corporate entities with violating Section 17(a)(1) of the Securities Act of 1933, Sections 9(a)(1), 9(a)(2), and 10(b) of the Exchange Act and Rules 10b-5(a) and (c) thereunder. The SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, and penny stock bars from Beaufort and Kyriacou. With respect to Hirschy, Mancino, and their corporate entities, the SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, penny stock bars, and an officer-and-director bar against Mancino.The investigation was conducted in the SEC's New York Regional Office by Tejal Shah and Joseph Darragh, Lorraine Collazo, and Michael D. Paley of the Microcap Fraud Task Force and supervised by Lara S. Mehraban, and in Washington, D.C. by Patrick L. Feeney, Robert Nesbitt, and Kevin Guerrero, and supervised by Antonia Chion. Preethi Krishnamurthy and Ms. Shah will lead the SEC's litigation against Beaufort and Kyriacou. Ann H. Petalas and Mr. Feeney, under the supervision of Cheryl Crumpton, will handle the SEC's litigation against Mancino, Hirschy, and their entities. The SEC appreciates the assistance of the Office of the United States Attorney for the Eastern District of New York, the Federal Bureau of Investigation, the Internal Revenue Service, the Alberta Securities Commission, the Ontario Securities Commission, the Financial Conduct Authority of the United Kingdom, and the Financial Industry Regulatory Authority.The Commission's investigation in this matter is continuing."""
input_tokenized = tokenizer.encode(text, return_tensors='pt', padding=padding, max_length=6144, truncation=True)
summary_ids = model.generate(input_tokenized,
num_beams=4,
no_repeat_ngram_size=3,
length_penalty=2,
min_length=350,
max_length=500)
summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0]
### Summary Output
# On March 2, 2018, the Securities and Exchange Commission charged Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, with manipulating the market for HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View through pump-and-dump schemes. According to the SEC's complaint, the defendants discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit. In a parallel action, the United States Attorney's Office for the Eastern District of New York announced criminal charges against the defendants. On March 4, the SEC announced the entry of an order suspending trading in the securities of HD View and for West Coast, pending the outcome of a parallel criminal action by the Federal Bureau of Investigation. Following the announcement of the suspension, HD View stock prices and volume increased significantly, and the defendants agreed to pay over $1.5 million in disgorgement, prejudgment interest, penalties, and an officer and director bar. Beaufort agreed to settle the charges without admitting or denying the allegations of the complaint, and to pay a $1 million civil penalty. The SEC's investigation, which is continuing, has been conducted by Patrick McCluskey and Cheryl Crumpton of the SEC Enforcement Division's Market Abuse Unit in the New York Regional Office. The SEC appreciates the assistance of the Financial Industry Regulatory Authority of the United Kingdom, the Canadian Securities Commission, the Alberta Securities Commission and the Ontario Securities Commission.
```
## Evaluation results
When the model is used for summarizing legal documents, it achieves the following results:
| Model | rouge1 | rouge1-precision | rouge2 | rouge2-precision | rougeL | rougeL-precision |
|:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:|
| legal-led-base-16384 | **55.69** | **61.73** | **29.03** | **36.68** | **32.65** | **40.43** |
| led-base-16384 | 29.19 | 30.43 | 15.23 | 16.27 | 16.32 | 16.58 |
|
CouchCat/ma_ner_v7_distil | CouchCat | 2021-02-28T20:54:46Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"ner",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language: en
license: mit
tags:
- ner
widget:
- text: "These shoes I recently bought from Tommy Hilfiger fit quite well. The shirt, however, has got a hole"
---
### Description
A Named Entity Recognition model trained on customer feedback data using DistilBERT.
Possible labels use BIO notation. Performance of the PERS tag could be better because of the low number of training samples:
- PROD: for certain products
- BRND: for brands
- PERS: people names
The following tags are simply in place to help better categorize the previous tags
- MATR: relating to materials, e.g. cloth, leather, seam, etc.
- TIME: time related entities
- MISC: any other entity that might skew the results
### Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_ner_v7_distil")
model = AutoModelForTokenClassification.from_pretrained("CouchCat/ma_ner_v7_distil")
```
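A minimal end-to-end sketch (not from the original card) using the `ner` pipeline; entity grouping is left at the pipeline defaults:

```python
from transformers import pipeline

# Hypothetical inference example; model and tokenizer ids are the ones above.
ner = pipeline(
    "ner",
    model="CouchCat/ma_ner_v7_distil",
    tokenizer="CouchCat/ma_ner_v7_distil",
)
print(ner("These shoes I recently bought from Tommy Hilfiger fit quite well."))
# Expected (not verified here): brand tokens tagged B-BRND / I-BRND.
```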
|
aishoo1612/VADER-With-heatmaps | aishoo1612 | 2021-02-28T11:46:32Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | # pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyser = SentimentIntensityAnalyzer()
analyser.polarity_scores("I hate watching movies")
import nltk
from nltk.tokenize import word_tokenize, RegexpTokenizer
from nltk.sentiment.vader import SentimentIntensityAnalyzer
nltk.download('all')
import numpy as np
sentence = """I love dancing & painting"""
tokenized_sentence = nltk.word_tokenize(sentence)
from nltk import word_tokenize
from typing import List
Analyzer = SentimentIntensityAnalyzer()
pos_word_list=[]
neu_word_list=[]
neg_word_list=[]
pos_score_list=[]
neg_score_list=[]
score_list=[]
for word in tokenized_sentence:
if (Analyzer.polarity_scores(word)['compound']) >= 0.1:
pos_word_list.append(word)
score_list.append(Analyzer.polarity_scores(word)['compound'])
elif (Analyzer.polarity_scores(word)['compound']) <= -0.1:
neg_word_list.append(word)
score_list.append(Analyzer.polarity_scores(word)['compound'])
else:
neu_word_list.append(word)
score_list.append(Analyzer.polarity_scores(word)['compound'])
print('Positive:',pos_word_list)
print('Neutral:',neu_word_list)
print('Negative:',neg_word_list)
print('Score:', score_list)
score = Analyzer.polarity_scores(sentence)
print('\nScores:', score)
predict_log=score.values()
value_iterator=iter(predict_log)
neg_prediction=next(value_iterator)
neu_prediction=next(value_iterator)
pos_prediction=next(value_iterator)
prediction_list=[neg_prediction, pos_prediction]
prediction_list_array=np.array(prediction_list)
import scipy.stats  # needed for the simulated probabilities below

def predict(texts, classes=np.arange(1, 6)):
    # Standalone version of the original snippet, which referenced an
    # undefined `self`; VADER's compound score stands in for self.score,
    # and `classes` defaults to five sentiment bins (an assumption).
    probs = []
    for text in texts:
        offset = (Analyzer.polarity_scores(text)['compound'] + 1) / 2.
        binned = np.digitize(5 * offset, classes) + 1
        simulated_probs = scipy.stats.norm.pdf(classes, binned, scale=0.5)
        probs.append(simulated_probs)
    return np.array(probs)
latex_special_token = ["!@#$%^&*()"]
import operator
def generate(text_list, attention_list, latex_file, color_neg='red', color_pos='green', rescale_value = False):
print("hello")
attention_list = rescale(attention_list)
word_num = len(text_list)
print(len(attention_list))
print(len(text_list))
text_list = clean_word(text_list)
with open(latex_file,'w') as f:
f.write(r'''\documentclass[varwidth]{standalone}
\special{papersize=210mm,297mm}
\usepackage{color}
\usepackage{tcolorbox}
\usepackage{CJK}
\usepackage{adjustbox}
\tcbset{width=0.9\textwidth,boxrule=0pt,colback=red,arc=0pt,auto outer arc,left=0pt,right=0pt,boxsep=5pt}
\begin{document}
\begin{CJK*}{UTF8}{gbsn}'''+'\n')
string = r'''{\setlength{\fboxsep}{0pt}\colorbox{white!0}{\parbox{0.9\textwidth}{'''+"\n"
for idx in range(len(attention_list)):
if attention_list[idx] > 0:
string += "\\colorbox{%s!%s}{"%(color_pos, attention_list[idx])+"\\strut " + text_list[idx]+"} "
else:
string += "\\colorbox{%s!%s}{"%(color_neg, -attention_list[idx])+"\\strut " + text_list[idx]+"} "
string += "\n}}}"
f.write(string+'\n')
f.write(r'''\end{CJK*}
\end{document}''')
def rescale(input_list):
the_array = np.asarray(input_list)
the_max = np.max(abs(the_array))
rescale = the_array/the_max
rescale = rescale*100
rescale = np.round(rescale, 3)
'''
the_array = np.asarray(input_list)
the_max = np.max(the_array)
the_min = np.min(the_array)
rescale = ((the_array - the_min)/(the_max-the_min))*100
for i in rescale:
print(rescale)
'''
return rescale.tolist()
def clean_word(word_list):
new_word_list = []
for word in word_list:
for latex_sensitive in ["\\", "%", "&", "^", "#", "_", "{", "}"]:
if latex_sensitive in word:
word = word.replace(latex_sensitive, '\\'+latex_sensitive)
new_word_list.append(word)
return new_word_list
if __name__ == '__main__':
color_1 = 'red'
color_2 = 'green'
words = word_tokenize(sentence)
word_num = len(words)
generate(words, score_list, "sple.tex", color_1, color_2)
|
atahmasb/tf-layoutlm-base-uncased | atahmasb | 2021-02-26T20:27:54Z | 5 | 0 | transformers | [
"transformers",
"tf",
"layoutlm",
"arxiv:1912.13318",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # LayoutLM
## Model description
LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves state-of-the-art results on multiple datasets. For more details, please refer to our paper:
[LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318)
Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, [KDD 2020](https://www.kdd.org/kdd2020/accepted-papers)
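As a quick sanity check, the checkpoint can be loaded with the TensorFlow LayoutLM classes in `transformers`. This is a minimal sketch (not from the original card); the tokenizer id and the zero-filled bounding boxes are assumptions:

```python
import tensorflow as tf
from transformers import LayoutLMTokenizer, TFLayoutLMModel

# Assumption: the base LayoutLM tokenizer from Microsoft pairs with this checkpoint.
tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMModel.from_pretrained("atahmasb/tf-layoutlm-base-uncased")

inputs = tokenizer("Hello world", return_tensors="tf")
# LayoutLM expects one (x0, y0, x1, y1) box per token; zeros are placeholders.
seq_len = inputs["input_ids"].shape[1]
bbox = tf.zeros((1, seq_len, 4), dtype=tf.int32)
outputs = model(input_ids=inputs["input_ids"], bbox=bbox,
                attention_mask=inputs["attention_mask"])
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```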
## Training data
We pre-train LayoutLM on the IIT-CDIP Test Collection 1.0\* dataset with two settings.
* LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters **(This Model)**
* LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters
## Citation
If you find LayoutLM useful in your research, please cite the following paper:
``` latex
@misc{xu2019layoutlm,
title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding},
author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou},
year={2019},
eprint={1912.13318},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
valhalla/s2t_librispeech_medium | valhalla | 2021-02-26T14:24:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"speech_to_text_transformer",
"text2text-generation",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
TODO: [To be filled]
## Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_medium").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_medium", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.5 | 7.8 | |
sismetanin/xlm_roberta_base-ru-sentiment-rusentiment | sismetanin | 2021-02-25T23:57:49Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"sentiment analysis",
"Russian",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## XLM-RoBERTa-Base-ru-sentiment-RuSentiment
XLM-RoBERTa-Base-ru-sentiment-RuSentiment is an [XLM-RoBERTa-Base](https://huggingface.co/xlm-roberta-base) model fine-tuned on the [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.
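A hedged usage sketch (not part of the original card; the class order and label names come from the checkpoint config and are not guaranteed here):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sismetanin/xlm_roberta_base-ru-sentiment-rusentiment"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Какой чудесный день!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class order follows model.config.id2label
```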
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@inproceedings{rogers2018rusentiment,
title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
booktitle={Proceedings of the 27th international conference on computational linguistics},
pages={755--763},
year={2018}
}
``` |
sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rusentiment | sismetanin | 2021-02-25T23:56:23Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text-classification",
"sentiment analysis",
"Russian",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## MBARTRuSumGazeta-ru-sentiment-RuSentiment
MBARTRuSumGazeta-ru-sentiment-RuSentiment is an [MBARTRuSumGazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) model fine-tuned on the [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.
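A minimal inference sketch (an assumption, not from the original card, that the fine-tuned MBART checkpoint loads through the standard text-classification pipeline):

```python
from transformers import pipeline

# Hypothetical usage; labels depend on the checkpoint config.
classifier = pipeline(
    "text-classification",
    model="sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rusentiment",
)
print(classifier("Спасибо, всё отлично!"))
```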
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@inproceedings{rogers2018rusentiment,
title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
booktitle={Proceedings of the 27th international conference on computational linguistics},
pages={755--763},
year={2018}
}
``` |
sismetanin/xlm_roberta_base-ru-sentiment-rureviews | sismetanin | 2021-02-25T23:51:22Z | 23 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"sentiment analysis",
"Russian",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## XLM-RoBERTa-Base-ru-sentiment-RuReviews
XLM-RoBERTa-Base-ru-sentiment-RuReviews is an [XLM-RoBERTa-Base](https://huggingface.co/xlm-roberta-base) model fine-tuned on the [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the ”Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia.
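As with the other checkpoints in this series, a hedged usage sketch (not from the original card):

```python
from transformers import pipeline

# Hypothetical usage; label names come from the checkpoint config.
classifier = pipeline(
    "text-classification",
    model="sismetanin/xlm_roberta_base-ru-sentiment-rureviews",
)
print(classifier("Платье отличное, размер подошёл идеально."))
```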
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@INPROCEEDINGS{Smetanin2019Sentiment,
author={Sergey Smetanin and Michail Komarov},
booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)},
title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks},
year={2019},
volume={01},
pages={482-486},
doi={10.1109/CBI.2019.00062},
ISSN={2378-1963},
month={July}
}
``` |
sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rureviews | sismetanin | 2021-02-25T23:49:57Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text-classification",
"sentiment analysis",
"Russian",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## MBARTRuSumGazeta-ru-sentiment-RuReviews
MBARTRuSumGazeta-ru-sentiment-RuReviews is an [MBARTRuSumGazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) model fine-tuned on the [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the ”Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia.
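A hedged loading sketch (not part of the original card; it assumes the checkpoint config maps to an MBART sequence-classification head):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rureviews"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Качество ткани ужасное.", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits.softmax(dim=-1))
```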
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@INPROCEEDINGS{Smetanin2019Sentiment,
author={Sergey Smetanin and Michail Komarov},
booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)},
title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks},
year={2019},
volume={01},
pages={482-486},
doi={10.1109/CBI.2019.00062},
ISSN={2378-1963},
month={July}
}
``` |
superspray/distilbert_base_squad2_custom_dataset | superspray | 2021-02-20T07:33:31Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | # Question & Answering Model for 'Save Your Minutes' from Dobby-AI
DistilBERT-base fine-tuned on SQuAD2.0 and a custom QA dataset
This model is [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) trained on an additional custom dataset as follows:
```
!python3 run_squad.py --model_type distilbert \
--model_name_or_path /content/distilbert_base_384 \
--do_lower_case \
--output_dir /content/model/\
--do_train \
--train_file $data_dir/additional_qa.json\
--version_2_with_negative \
--num_train_epochs 3 \
--weight_decay 0.01 \
--learning_rate 3e-5 \
--max_grad_norm 0.5 \
--adam_epsilon 1e-6 \
--max_seq_length 512 \
--doc_stride 128 \
--threads 12 \
--logging_steps 50 \
--save_steps 1000 \
--overwrite_output_dir \
--per_gpu_train_batch_size 4
```
We used Google Colab to train the model.
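A hedged inference sketch (not from the original card), assuming the checkpoint works with the standard question-answering pipeline:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="superspray/distilbert_base_squad2_custom_dataset",
)
result = qa(
    question="What was used for training?",
    context="We used Google Colab for training the model on a custom QA dataset.",
)
print(result["answer"], result["score"])
```
 |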
joeddav/distilbert-base-uncased-go-emotions-student | joeddav | 2021-02-19T22:15:52Z | 23,165 | 72 | transformers | [
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"tensorflow",
"en",
"dataset:go_emotions",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: en
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- go_emotions
license: mit
widget:
- text: "I feel lucky to be here."
---
# distilbert-base-uncased-go-emotions-student
## Model Description
This model is distilled from the zero-shot classification pipeline on the unlabeled GoEmotions dataset using [this
script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation).
It was trained with mixed precision for 10 epochs and otherwise used the default script arguments.
## Intended Usage
The model can be used like any other model trained on GoEmotions, but will likely not perform as well as a model
trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model
can be distilled to a more efficient student, allowing a classifier to be trained with only unlabeled data. Note
that although the GoEmotions dataset allows multiple labels per instance, the teacher used single-label
classification to create pseudo-labels.
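A short usage sketch (not part of the original card); the exact behaviour of `return_all_scores` depends on the installed `transformers` version:

```python
from transformers import pipeline

# Returns the full distribution over the GoEmotions labels.
classifier = pipeline(
    "text-classification",
    model="joeddav/distilbert-base-uncased-go-emotions-student",
    return_all_scores=True,
)
print(classifier("I feel lucky to be here."))
```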
|
joeddav/distilbert-base-uncased-agnews-student | joeddav | 2021-02-18T20:41:19Z | 14 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"tensorflow",
"en",
"dataset:ag_news",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: en
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- ag_news
license: mit
widget:
- text: "Armed conflict has been a near-constant policial and economic burden."
- text: "Tom Brady won his seventh Super Bowl last night."
- text: "Dow falls more than 100 points after disappointing jobs data"
- text: "A new moon has been discovered in Jupter's orbit."
---
# distilbert-base-uncased-agnews-student
## Model Description
This model is distilled from the zero-shot classification pipeline on the unlabeled AG's News dataset using [this
script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation).
It is the result of the demo notebook
[here](https://colab.research.google.com/drive/1mjBjd0cR8G57ZpsnFCS3ngGyo5nCa9ya?usp=sharing), where more details
about the model can be found.
- Teacher model: [roberta-large-mnli](https://huggingface.co/roberta-large-mnli)
- Teacher hypothesis template: `"This text is about {}."`
## Intended Usage
The model can be used like any other model trained on AG's News, but will likely not perform as well as a model
trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model
can be distilled to a more efficient student.
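A short usage sketch (not from the original card):

```python
from transformers import pipeline

# Predicts one of the four AG's News topics for the input text.
classifier = pipeline(
    "text-classification",
    model="joeddav/distilbert-base-uncased-agnews-student",
)
print(classifier("Dow falls more than 100 points after disappointing jobs data"))
```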
|