modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
facebook/levit-384 | f71571497ab7affc0f78cc89432b0bd94704ec22 | 2022-06-01T13:20:59.000Z | [
"pytorch",
"levit",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.01136",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/levit-384 | 188 | null | transformers | 3,700 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# LeViT
LeViT-384 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT).
Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Usage
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-384')
model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-384')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
``` |
Helsinki-NLP/opus-mt-nl-fr | a53c0a8a8bee7266a39fd56d737a6c9996dc1909 | 2021-09-10T13:59:19.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nl",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-nl-fr | 187 | null | transformers | 3,701 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-fr
* source languages: nl
* target languages: fr
* OPUS readme: [nl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.fr | 51.3 | 0.674 |
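A minimal usage sketch (added for illustration; not part of the original card), assuming the standard MarianMT classes from 🤗 Transformers — the Dutch example sentence is invented:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-nl-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Dutch sentence into French.
batch = tokenizer(["Dit is een voorbeeldzin."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```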
|
J-Chiang/DialoGPT-small-thor | 4baaa45636fc157c7a36ba396e4542d29368eaec | 2021-09-05T16:33:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | J-Chiang | null | J-Chiang/DialoGPT-small-thor | 187 | null | transformers | 3,702 | ---
tags:
- conversational
---
# Thor DialoGPT Model |
deepset/tinybert-6l-768d-squad2 | d2f54c1d54eb6d509bd108987df3e7ebb3d25e6f | 2022-07-26T08:31:16.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"arxiv:1909.10351",
"transformers",
"exbert",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | deepset | null | deepset/tinybert-6l-768d-squad2 | 187 | null | transformers | 3,703 | ---
language: en
datasets:
- squad_v2
license: mit
thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
tags:
- exbert
model-index:
- name: deepset/tinybert-6l-768d-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 73.8248
verified: true
- name: F1
type: f1
value: 77.1684
verified: true
---
## Overview
**Language model:** deepset/tinybert-6L-768D-squad2
**Language:** English
**Training data:** SQuAD 2.0 training set x 20 augmented + SQuAD 2.0 training set without augmentation
**Eval data:** SQuAD 2.0 dev set
**Infrastructure**: 1x V100 GPU
**Published**: Dec 8th, 2021
## Details
- Haystack's intermediate-layer and prediction-layer distillation features were used for training (based on [TinyBERT](https://arxiv.org/pdf/1909.10351.pdf)). deepset/bert-base-uncased-squad2 was used as the teacher model and huawei-noah/TinyBERT_General_6L_768D was used as the student model.
## Hyperparameters
### Intermediate layer distillation
```
batch_size = 26
n_epochs = 5
max_seq_len = 384
learning_rate = 5e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 1
```
### Prediction layer distillation
```
batch_size = 26
n_epochs = 5
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 1
distillation_loss_weight = 1.0
```
## Performance
```
"exact": 71.87736882001179
"f1": 76.36111895973675
```
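As a quick sanity check, here is a minimal usage sketch (added for illustration; not part of the original card) with the 🤗 Transformers question-answering pipeline; the question and context below are invented:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/tinybert-6l-768d-squad2")
result = qa(
    question="Which model was used as the teacher?",
    context="deepset/bert-base-uncased-squad2 was used as the teacher model during distillation.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```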
## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs) |
naclbit/gpt-j-japanese-6.8b | 135e2c5420171f09636ba25f45b9934c70278728 | 2021-11-10T15:28:57.000Z | [
"gptj",
"text-generation",
"ja",
"arxiv:2104.09864",
"transformers",
"japanese",
"pytorch",
"t5tokenizer",
"sentencepiece",
"license:apache-2.0"
] | text-generation | false | naclbit | null | naclbit/gpt-j-japanese-6.8b | 187 | 3 | transformers | 3,704 | ---
language:
- ja
tags:
- japanese
- text-generation
- gptj
- pytorch
- transformers
- t5tokenizer
- sentencepiece
license: apache-2.0
---
This pre-trained model is work in progress! Model weight download will be available in the future.
A 6.8-billion-parameter pre-trained model for the Japanese language, based on EleutherAI's Mesh Transformer JAX, with a model structure similar to their GPT-J-6B pre-trained model.
EleutherAIによるMesh Transformer JAXをコードベースとした、GPT-J-6Bに似たストラクチャと約68.7億パラメータを持つ日本語pre-trainedモデルです。
- We used T5Tokenizer and SentencePiece instead of the GPT-2/3 tokenizer. The normalization performed by SentencePiece is a must for Japanese tokenization, as common symbols have many more variations than in Western languages.
- The tokenizer has a vocabulary of 52,500 tokens and was trained on a Japanese Wikipedia dump as of 01 Aug 2021.
- The model fits on 16GB-VRAM GPUs such as the P100 for inference up to a context length of 1688. Full 2048-context-length output requires 20GB of VRAM or more (e.g. RTX 3090/A5000).
- The model was trained for about 4 weeks on a TPU v3-128 generously provided by Google TRC. We are currently formatting additional datasets and preparing for more training time.
## Specifications
| Hyperparameter | Value |
|-------------------|--------|
| n_parameters | 6,876,450,080 |
| n_layers | 32 |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 52,512 |
| position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE dimensions | 64 |
## Instructions
We recommend using finetuneanon's forked Transformers codebase for inference, as the split checkpoint loads much faster than the monolithic checkpoint supported by the Hugging Face Transformers repository.
The tokenizer still uses 50256 as the `<|endoftext|>` substitute. Therefore, 50256 should be excluded when running inference.
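A purely hypothetical sketch (added for illustration; not part of the original card) of how excluding token 50256 could look if the released weights were loaded through the stock 🤗 Transformers GPT-J classes rather than the recommended fork — the loading path and prompt are assumptions:
```python
from transformers import AutoTokenizer, GPTJForCausalLM

# Hypothetical: assumes the weights load through the standard GPT-J classes.
tokenizer = AutoTokenizer.from_pretrained("naclbit/gpt-j-japanese-6.8b")
model = GPTJForCausalLM.from_pretrained("naclbit/gpt-j-japanese-6.8b")

inputs = tokenizer("日本語の文章を生成する例:", return_tensors="pt")
# Exclude token id 50256 (the <|endoftext|> substitute) during generation, as noted above.
outputs = model.generate(**inputs, max_new_tokens=50, bad_words_ids=[[50256]])
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```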
## Datasets
The lack of a quality Japanese corpus was one of the major challenges when we trained the model. We aimed to compile well-formatted corpora outside of Common Crawl.
The dataset is normalized and sanitized against leading and trailing spaces and excessive CR/LF repetitions.
The whole dataset is about 400GB (as of October 2021) and 106B tokens (compared to 825GB/300B tokens for The Pile).
**Common Crawl**
- Jan-Dec 2018 72GB CC100-Japanese (https://metatext.io/datasets/cc100-japanese)
- November 2018 106GB OSCAR-Japanese (https://oscar-corpus.com)
- 75GB Converted 860GB Google C4 Multilingual Japanese (re-formatted)
**Books**
- 140GB Web Fictions, non-fictions and blogs corpus
- 5GB Books and Aozora Bunko corpus (weighted 2x)
**News**
- 1GB Scientific news, medical news and web news corpus
**Wikipedia**
- Aug 2021 3GB Assorted and Deduplicated Japanese Wikipedia (weighted 2x)
- Aug 2021 Wikibooks, Wikinews, Wikiquote, Wikisource, Wiktionary, Wikiversity and Wikivoyage
**Other Corpora**
- 2018 OpenSubtitles (https://opus.nlpl.eu/OpenSubtitles-v2018.php)
- 80-90's BBS Logs
- Assorted Blogs Crawl
- QED-ja
- TED 2020-ja |
razent/spbert-mlm-wso-base | 0ac86fd5e116460e62e422ad9f50736367f3e36c | 2022-03-15T03:25:41.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"code",
"arxiv:2106.09997",
"transformers",
"question-answering",
"knowledge-graph",
"autotrain_compatible"
] | question-answering | false | razent | null | razent/spbert-mlm-wso-base | 187 | null | transformers | 3,705 | ---
language:
- code
tags:
- question-answering
- knowledge-graph
---
# SPBERT MLM+WSO (Initialized)
## Introduction
Paper: [SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs](https://arxiv.org/abs/2106.09997)
Authors: _Hieu Tran, Long Phan, James Anibal, Binh T. Nguyen, Truong-Son Nguyen_
## How to use
For more details, do check out [our Github repo](https://github.com/heraclex12/NLP2SPARQL).
Here is an example in Pytorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('razent/spbert-mlm-wso-base')
model = AutoModel.from_pretrained("razent/spbert-mlm-wso-base")
text = "select * where brack_open var_a var_b var_c sep_dot brack_close"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
or Tensorflow
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('razent/spbert-mlm-wso-base')
model = TFAutoModel.from_pretrained("razent/spbert-mlm-wso-base")
text = "select * where brack_open var_a var_b var_c sep_dot brack_close"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Citation
```
@misc{tran2021spbert,
title={SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs},
author={Hieu Tran and Long Phan and James Anibal and Binh T. Nguyen and Truong-Son Nguyen},
year={2021},
eprint={2106.09997},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
sismetanin/sbert-ru-sentiment-rureviews | b4aaa41ae90fe37f9caa4c8b769de0720d65f62a | 2021-05-20T06:35:54.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"ru",
"transformers",
"sentiment analysis",
"Russian"
] | text-classification | false | sismetanin | null | sismetanin/sbert-ru-sentiment-rureviews | 187 | 1 | transformers | 3,706 | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## SBERT-ru-sentiment-RuReviews
SBERT-ru-sentiment-RuReviews is a [SBERT-Large](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) model fine-tuned on [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the “Women's Clothes and Accessories” product category on the primary e-commerce site in Russia.
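A minimal usage sketch (added for illustration; not part of the original card) via the 🤗 Transformers text-classification pipeline; the example review is invented and the returned label names depend on the model's config:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sismetanin/sbert-ru-sentiment-rureviews")
# A made-up positive product review in Russian.
print(classifier("Отличное качество, быстрая доставка, всем рекомендую!"))
```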
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models' results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@INPROCEEDINGS{Smetanin2019Sentiment,
author={Sergey Smetanin and Michail Komarov},
booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)},
title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks},
year={2019},
volume={01},
pages={482-486},
doi={10.1109/CBI.2019.00062},
ISSN={2378-1963},
month={July}
}
``` |
svalabs/bi-electra-ms-marco-german-uncased | 02c3286af86f4b09bff0d6e59f61581f3b54dbf7 | 2021-06-14T07:46:23.000Z | [
"pytorch",
"electra",
"feature-extraction",
"arxiv:1908.10084",
"arxiv:1611.09268",
"arxiv:2104.08663",
"arxiv:2104.12741",
"transformers"
] | feature-extraction | false | svalabs | null | svalabs/bi-electra-ms-marco-german-uncased | 187 | 3 | transformers | 3,707 | # SVALabs - German Uncased Electra Bi-Encoder
In this repository, we present our German, uncased bi-encoder for passage retrieval.
This model was trained on the basis of the German ELECTRA uncased model from the [german-nlp-group](https://huggingface.co/german-nlp-group/electra-base-german-uncased) and fine-tuned as a bi-encoder for passage retrieval using the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) package.
For this purpose, we translated the [MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) dataset using the [fairseq-wmt19-en-de](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) translation model.
### Model Details
| | Description or Link |
|---|---|
|**Base model** | [```german-nlp-group/electra-base-german-uncased```](https://huggingface.co/german-nlp-group/electra-base-german-uncased) |
|**Finetuning task**| Passage Retrieval / Semantic Search |
|**Source dataset**| [```MSMARCO-Passage-Ranking```](https://github.com/microsoft/MSMARCO-Passage-Ranking) |
|**Translation model**| [```fairseq-wmt19-en-de```](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) |
### Performance
We evaluated our model on the [GermanDPR testset](https://deepset.ai/germanquad) and followed the benchmark framework of [BEIR](https://github.com/UKPLab/beir).
In order to compare our results, we conducted an evaluation on the same test data with BM25 and presented the results in the table below.
We took every paragraph with negative and positive context out of the test set and deduplicated them. The resulting corpus contains 2,871 passages, evaluated against 1,025 queries.
| Model | NDCG@1 | NDCG@5 | NDCG@10 | Recall@1 | Recall@5 | Recall@10 |
|:-------:|:--------:|:--------:|:---------:|:--------:|:----------:|:-----------:|
| BM25 | 0.1463 | 0.3451 | 0.4097 | 0.1463 | 0.5424 | 0.7415 |
| Ours | 0.4624 | 0.6218 | 0.6425 | 0.4624 | 0.7581 | 0.8205 |
### How to Use
With ```sentence-transformers``` package (see [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers) on GitHub for more details):
```python
from sentence_transformers import SentenceTransformer
bi_model = SentenceTransformer("svalabs/bi-electra-ms-marco-german-uncased")
```
### Semantic Search Example
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
K = 3 # number of top ranks to retrieve
# specify documents and queries
docs = [
"Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.",
"Der Gepard jagt seine Beute.",
"Wir haben in der Agentur ein neues System für Zeiterfassung.",
"Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte.",
"Einen Impftermin kann mir der Arzt momentan noch nicht anbieten.",
"Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut.",
"Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig.",
"Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen.",
"Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet.",
"Bei ALDI sind die Bananen gerade im Angebot.",
"Die Entstehung der Erde ist 4,5 milliarden jahre her.",
"Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.",
"DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main.",
]
queries = [
"dax steigt",
"dax sinkt",
"probleme mit knieschmerzen",
"software für urlaubsstunden",
"raubtier auf der jagd",
"alter der erde",
"wie alt ist unser planet?",
"wie kapital sichern",
"supermarkt lebensmittel reduziert",
"wodurch ist der tyrannosaurus aussgestorben",
"serien streamen"
]
# encode documents and queries
features_docs = bi_model.encode(docs)
features_queries = bi_model.encode(queries)
# compute pairwise cosine similarity scores
sim = cosine_similarity(features_queries, features_docs)
# print results
for i, query in enumerate(queries):
ranks = np.argsort(-sim[i])
print("Query:", query)
for j, r in enumerate(ranks[:K]):
print(f"[{j}: {sim[i, r]: .3f}]", docs[r])
print("-"*96)
```
**Console Output**:
```
Query: dax steigt
[0: 0.811] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.
[1: 0.719] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main.
[2: 0.218] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.
------------------------------------------------------------------------------------------------
Query: dax sinkt
[0: 0.815] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main.
[1: 0.719] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.
[2: 0.243] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.
------------------------------------------------------------------------------------------------
Query: probleme mit knieschmerzen
[0: 0.237] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte.
[1: 0.209] Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig.
[2: 0.182] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main.
------------------------------------------------------------------------------------------------
Query: software für urlaubsstunden
[0: 0.478] Wir haben in der Agentur ein neues System für Zeiterfassung.
[1: 0.208] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.
[2: 0.190] Bei ALDI sind die Bananen gerade im Angebot.
------------------------------------------------------------------------------------------------
Query: raubtier auf der jagd
[0: 0.599] Der Gepard jagt seine Beute.
[1: 0.264] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.
[2: 0.159] Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut.
------------------------------------------------------------------------------------------------
Query: alter der erde
[0: 0.705] Die Entstehung der Erde ist 4,5 milliarden jahre her.
[1: 0.413] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet.
[2: 0.262] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.
------------------------------------------------------------------------------------------------
Query: wie alt ist unser planet?
[0: 0.441] Die Entstehung der Erde ist 4,5 milliarden jahre her.
[1: 0.335] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.
[2: 0.302] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet.
------------------------------------------------------------------------------------------------
Query: wie kapital sichern
[0: 0.547] Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen.
[1: 0.331] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.
[2: 0.143] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.
------------------------------------------------------------------------------------------------
Query: supermarkt lebensmittel reduziert
[0: 0.455] Bei ALDI sind die Bananen gerade im Angebot.
[1: 0.362] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main.
[2: 0.345] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.
------------------------------------------------------------------------------------------------
Query: wodurch ist der tyrannosaurus aussgestorben
[0: 0.457] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet.
[1: 0.216] Der Gepard jagt seine Beute.
[2: 0.195] Die Entstehung der Erde ist 4,5 milliarden jahre her.
------------------------------------------------------------------------------------------------
Query: serien streamen
[0: 0.570] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.
[1: 0.361] Wir haben in der Agentur ein neues System für Zeiterfassung.
[2: 0.282] Bei ALDI sind die Bananen gerade im Angebot.
------------------------------------------------------------------------------------------------
```
### Contact
- Baran Avinc, [email protected]
- Jonas Grebe, [email protected]
- Lisa Stolz, [email protected]
- Bonian Riebe, [email protected]
### References
- N. Reimers and I. Gurevych (2019), ['Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks'](https://arxiv.org/abs/1908.10084).
- Payal Bajaj et al. (2018), ['MS MARCO: A Human Generated MAchine Reading COmprehension Dataset'](https://arxiv.org/abs/1611.09268).
- N. Thakur et al. (2021), ['BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models'](https://arxiv.org/abs/2104.08663).
- T. Möller, J. Risch and M. Pietsch (2021), ['GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval'](https://arxiv.org/abs/2104.12741).
|
allenai/PRIMERA-wcep | 8d70caf941f521ae3238d05277812b9a38a2d591 | 2022-06-25T16:04:32.000Z | [
"pytorch",
"tf",
"led",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/PRIMERA-wcep | 187 | 1 | transformers | 3,708 | ---
license: apache-2.0
---
HF-version model for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization (ACL 2022).
The original code can be found [here](https://github.com/allenai/PRIMER). You can find the script and notebook to train/evaluate the model in the original github repo.
* Note: due to the difference between the implementations of the original Longformer and the Hugging Face LED model, the results of the converted models are slightly different. We ran a sanity check on both fine-tuned and non-fine-tuned models on the **Multinews dataset**, and show the results below:
| Model | Rouge-1 | Rouge-2 | Rouge-L |
| --- | ----------- |----------- |----------- |
| PRIMERA | 42.0 | 13.6 | 20.8|
| PRIMERA-hf | 41.7 |13.6 | 20.5|
| PRIMERA(finetuned) | 49.9 | 21.1 | 25.9|
| PRIMERA-hf(finetuned) | 49.9 | 20.9 | 25.8|
You can use it as follows:
```python
from transformers import (
AutoTokenizer,
LEDConfig,
LEDForConditionalGeneration,
)
tokenizer = AutoTokenizer.from_pretrained('allenai/PRIMERA')
config=LEDConfig.from_pretrained('allenai/PRIMERA')
model = LEDForConditionalGeneration.from_pretrained('allenai/PRIMERA')
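
# --- Added sketch (not in the original card): multi-document summarization ---
# PRIMERA joins input documents with a special separator token; "<doc-sep>" and the
# example inputs below are assumptions based on the original PRIMER repository.
import torch
docs = ["First news article about the event ...", "Second news article about the event ..."]
inputs = tokenizer(" <doc-sep> ".join(docs), return_tensors="pt", truncation=True, max_length=4096)
# LED-style global attention on the first token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1
summary_ids = model.generate(**inputs, global_attention_mask=global_attention_mask,
                             max_length=128, num_beams=5)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))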
``` |
Helsinki-NLP/opus-mt-ar-fr | 0ef20fb1109cc990302c5a660af876dc2874e862 | 2021-09-09T21:26:19.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ar",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ar-fr | 186 | null | transformers | 3,709 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ar-fr
* source languages: ar
* target languages: fr
* OPUS readme: [ar-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ar.fr | 43.5 | 0.602 |
|
tugstugi/bert-base-mongolian-cased | f07c2d5cb25c1fc6baac69a875e8e1bbd040872a | 2021-05-20T08:12:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"mn",
"arxiv:1810.04805",
"transformers",
"mongolian",
"cased",
"autotrain_compatible"
] | fill-mask | false | tugstugi | null | tugstugi/bert-base-mongolian-cased | 186 | null | transformers | 3,710 | ---
language: "mn"
tags:
- bert
- mongolian
- cased
---
# BERT-BASE-MONGOLIAN-CASED
[Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert)
## Model description
This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu).
Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs.
This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/),
[huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese).
#### How to use
```python
from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-base-mongolian-cased', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-base-mongolian-cased')
## declare task ##
pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)
## example ##
input_ = '[MASK] хот Монгол улсын нийслэл.'
output_ = pipe(input_)
for i in range(len(output_)):
print(output_[i])
## output ##
# {'sequence': 'Улаанбаатар хот Монгол улсын нийслэл.', 'score': 0.826970100402832, 'token': 281, 'token_str': 'Улаанбаатар'}
# {'sequence': 'Нийслэл хот Монгол улсын нийслэл.', 'score': 0.06551621109247208, 'token': 4059, 'token_str': 'Нийслэл'}
# {'sequence': 'Эрдэнэт хот Монгол улсын нийслэл.', 'score': 0.0264141745865345, 'token': 2229, 'token_str': 'Эрдэнэт'}
# {'sequence': 'Дархан хот Монгол улсын нийслэл.', 'score': 0.017083868384361267, 'token': 1646, 'token_str': 'Дархан'}
# {'sequence': 'УБ хот Монгол улсын нийслэл.', 'score': 0.010854342952370644, 'token': 7389, 'token_str': 'УБ'}
```
## Training data
Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)]
### BibTeX entry and citation info
```bibtex
@misc{mongolian-bert,
author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold},
title = {BERT Pretrained Models on Mongolian Datasets},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}}
}
```
|
xiaoheiqaq/DialoGPT-mediumJojo | 5b0d2080dc7aff57fb88bbcbeac4db2ca616cb16 | 2021-09-24T14:51:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | xiaoheiqaq | null | xiaoheiqaq/DialoGPT-mediumJojo | 186 | null | transformers | 3,711 | ---
tags:
- conversational
---
# Joseph Joestar DialoGPT Model |
voidful/mhubert-base | f6b161b74babb312406f85d63c9982285a441f06 | 2022-06-22T08:18:17.000Z | [
"pytorch",
"hubert",
"feature-extraction",
"transformers"
] | feature-extraction | false | voidful | null | voidful/mhubert-base | 186 | null | transformers | 3,712 | # mhubert-base
* the checkpoint converted from [textless s2st real data](https://github.com/facebookresearch/fairseq/blob/b5a039c292facba9c73f59ff34621ec131d82341/examples/speech_to_speech/docs/textless_s2st_real_data.md)
## Usage
```
asrp==0.0.35 # extracted from fairseq repo
```
```python
# https://huggingface.co/voidful/mhubert-base/resolve/main/mhubert_base_vp_en_es_fr_it3_L11_km1000.bin
# https://keithito.com/LJ-Speech-Dataset/LJ037-0171.wav
import asrp
hc = asrp.HubertCode("voidful/mhubert-base", './mhubert_base_vp_en_es_fr_it3_L11_km1000.bin', 11)
code = hc('./LJ037-0171.wav')['code']
```
result:
```
array([991, 393, 946, 215, 215, 327, 487, 487, 219, 219, 522, 522, 975,
975, 975, 975, 668, 576, 576, 384, 761, 907, 430, 748, 12, 12,
977, 877, 179, 961, 428, 428, 822, 89, 194, 194, 664, 817, 817,
146, 146, 146, 283, 283, 352, 352, 428, 428, 812, 523, 143, 105,
105, 244, 244, 583, 583, 576, 384, 879, 32, 170, 683, 731, 600,
600, 702, 15, 59, 754, 872, 324, 789, 789, 402, 908, 380, 211,
179, 961, 207, 950, 321, 113, 327, 327, 932, 148, 148, 202, 393,
946, 215, 215, 406, 406, 423, 423, 6, 384, 879, 879, 219, 219,
522, 522, 589, 589, 337, 126, 126, 126, 323, 740, 663, 663, 969,
969, 969, 506, 506, 506, 545, 545, 85, 85, 297, 297, 265, 675,
237, 237, 307, 407, 407, 499, 407, 334, 334, 334, 111, 666, 666,
277, 128, 665, 644, 644, 389, 771, 46, 46, 179, 961, 931, 428,
822, 822, 89, 194, 194, 664, 765, 765, 302, 302, 205, 205, 521,
521, 29, 29, 537, 393, 393, 946, 734, 263, 45, 914, 445, 469,
469, 469, 482, 972, 972, 972, 972, 333, 333, 817, 817, 817, 146,
146, 146, 283, 88, 352, 352, 915, 143, 79, 79, 868, 868, 220,
220, 870, 45, 272, 313, 313, 367, 367, 729, 729, 409, 409, 409,
45, 468, 468, 468, 468, 468, 468, 468, 468, 340, 340, 340, 340,
340, 340, 340, 340, 380, 660, 555, 555, 208, 417, 942, 605, 193,
121, 407, 704, 704, 704, 704, 334, 499, 226, 226, 621, 128, 665,
665, 991, 991, 459, 459, 459, 173, 945, 945, 945, 233, 233, 479,
479, 479, 479, 330, 776, 776, 655, 655, 655, 837, 837, 81, 81,
664, 429, 148, 431, 431, 531, 531, 531, 531, 531, 668, 167, 104,
104, 104, 70, 70, 185, 686, 85, 85, 85, 297, 243, 243, 172,
172, 871, 877, 89, 194, 664, 470, 470, 152, 152, 152, 429, 429,
429, 429, 290, 943, 943, 943, 484, 488, 620, 352, 915, 143, 38,
479, 479, 479, 479, 330, 330, 776, 167, 655, 655, 655, 837, 837,
81, 81, 81, 284, 284, 377, 377, 663, 969, 969, 969, 555, 555,
208, 433, 755, 942, 942, 605, 193, 121, 121, 121, 704, 704, 334])
```
## Eval
```python
# https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj/g_00500000
import asrp
hc = asrp.Code2Speech('./g_00500000', vocoder='hifigan', end_tok=999, code_begin_pad=0)
# play on notebook
import IPython.display as ipd
ipd.Audio(data=hc(code), autoplay=False, rate=16000)
```
|
dominguesm/bert-restore-punctuation-ptbr | b02fa886fceb570b968e122cd425025ef4f59bce | 2022-07-14T16:01:58.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:wiki_lingua",
"transformers",
"named-entity-recognition",
"Transformer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | dominguesm | null | dominguesm/bert-restore-punctuation-ptbr | 186 | 1 | transformers | 3,713 | ---
language:
- pt
license: cc-by-4.0
datasets:
- wiki_lingua
thumbnail: null
tags:
- named-entity-recognition
- Transformer
- pytorch
- bert
metrics:
- f1
- precision
- recall
model-index:
- name: rpunct-ptbr
results:
- task:
type: named-entity-recognition
dataset:
type: wiki_lingua
name: wiki_lingua
metrics:
- type: f1
value: 55.70
name: F1 Score
- type: precision
value: 57.72
name: Precision
- type: recall
value: 53.83
name: Recall
widget:
- text: "henrique foi no lago pescar com o pedro mais tarde foram para a casa do pedro fritar os peixes"
- text: "cinco trabalhadores da construção civil em capacetes e coletes amarelos estão ocupados no trabalho"
- text: "na quinta feira em visita a belo horizonte pedro sobrevoa a cidade atingida pelas chuvas"
- text: "coube ao representante de classe contar que na avaliação de língua portuguesa alguns alunos se mantiveram concentrados e outros dispersos"
---
# 🤗 bert-restore-punctuation-ptbr
* 🪄 [W&B Dashboard](https://wandb.ai/dominguesm/RestorePunctuationPTBR)
* ⛭ [GitHub](https://github.com/DominguesM/respunct)
This is a [bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) model finetuned for punctuation restoration on [WikiLingua](https://github.com/esdurmus/Wikilingua).
This model is intended for direct use as a punctuation restoration model for the general Portuguese language. Alternatively, you can use this for further fine-tuning on domain-specific texts for punctuation restoration tasks.
The model restores the following punctuation marks: **[! ? . , - : ; ' ]**
The model also restores the upper-casing of words.
-----------------------------------------------
## 🤷 Usage
🇧🇷 An easy-to-use package to restore punctuation of Portuguese texts.
**Below is a quick way to use the package.**
1. First, install the package.
```
pip install respunct
```
2. Sample python code.
``` python
from respunct import RestorePuncts
model = RestorePuncts()
model.restore_puncts("""
henrique foi no lago pescar com o pedro mais tarde foram para a casa do pedro fritar os peixes""")
# output:
# Henrique foi no lago pescar com o Pedro. Mais tarde, foram para a casa do Pedro fritar os peixes.
```
-----------------------------------------------
## 🎯 Accuracy
| label | precision | recall | f1-score | support|
| ------------------------- | -------------|-------- | ----------|--------|
| **Upper - OU** | 0.89 | 0.91 | 0.90 | 69376
| **None - OO** | 0.99 | 0.98 | 0.98 | 857659
| **Full stop/period - .O** | 0.86 | 0.93 | 0.89 | 60410
| **Comma - ,O** | 0.85 | 0.83 | 0.84 | 48608
| **Upper + Comma - ,U** | 0.73 | 0.76 | 0.75 | 3521
| **Question - ?O** | 0.68 | 0.78 | 0.73 | 1168
| **Upper + period - .U** | 0.66 | 0.72 | 0.69 | 1884
| **Upper + colon - :U** | 0.59 | 0.63 | 0.61 | 352
| **Colon - :O** | 0.70 | 0.53 | 0.60 | 2420
| **Question Mark - ?U** | 0.50 | 0.56 | 0.53 | 36
| **Upper + Exclam. - !U** | 0.38 | 0.32 | 0.34 | 38
| **Exclamation Mark - !O** | 0.30 | 0.05 | 0.08 | 783
| **Semicolon - ;O** | 0.35 | 0.04 | 0.08 | 1557
| **Apostrophe - 'O** | 0.00 | 0.00 | 0.00 | 3
| **Hyphen - -O** | 0.00 | 0.00 | 0.00 | 3
| | | | |
| **accuracy** | | | 0.96 | 1047818
| **macro avg** | 0.57 | 0.54 | 0.54 | 1047818
| **weighted avg** | 0.96 | 0.96 | 0.96 | 1047818
-----------------------------------------------
## 🤙 Contact
[Maicon Domingues]([email protected]) for questions, feedback and/or requests for similar models.
|
nickprock/distilbert-base-uncased-banking77-classification | 4a2b896807442154741d09d2edba1a3857fa0d4e | 2022-07-21T12:44:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:banking77",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | nickprock | null | nickprock/distilbert-base-uncased-banking77-classification | 186 | null | transformers | 3,714 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-banking77-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924025974025974
- task:
type: text-classification
name: Text Classification
dataset:
name: banking77
type: banking77
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.924025974025974
verified: true
- name: Precision Macro
type: precision
value: 0.9278003086307286
verified: true
- name: Precision Micro
type: precision
value: 0.924025974025974
verified: true
- name: Precision Weighted
type: precision
value: 0.9278003086307287
verified: true
- name: Recall Macro
type: recall
value: 0.9240259740259743
verified: true
- name: Recall Micro
type: recall
value: 0.924025974025974
verified: true
- name: Recall Weighted
type: recall
value: 0.924025974025974
verified: true
- name: F1 Macro
type: f1
value: 0.9243068139192414
verified: true
- name: F1 Micro
type: f1
value: 0.924025974025974
verified: true
- name: F1 Weighted
type: f1
value: 0.9243068139192416
verified: true
- name: loss
type: loss
value: 0.31516405940055847
verified: true
widget:
- text: 'Can I track the card you sent to me? '
example_title: Card Arrival Example
- text: Can you explain your exchange rate policy to me?
example_title: Exchange Rate Example
- text: I can't pay by my credit card
example_title: Card Not Working Example
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-banking77-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3152
- Accuracy: 0.9240
- F1 Score: 0.9243
## Model description
This is my first fine-tuning experiment using Hugging Face.
Using DistilBERT as the pretrained model, I trained a classifier for online banking queries.
It could be useful for handling support tickets.
## Intended uses & limitations
The model can be used for text classification. In particular, it is fine-tuned on the banking domain.
## Training and evaluation data
The dataset used is [banking77](https://huggingface.co/datasets/banking77)
The 77 labels are:
|label|intent|
|:---:|:----:|
|0|activate_my_card|
|1|age_limit|
|2|apple_pay_or_google_pay|
|3|atm_support|
|4|automatic_top_up|
|5|balance_not_updated_after_bank_transfer|
|6|balance_not_updated_after_cheque_or_cash_deposit|
|7|beneficiary_not_allowed|
|8|cancel_transfer|
|9|card_about_to_expire|
|10|card_acceptance|
|11|card_arrival|
|12|card_delivery_estimate|
|13|card_linking|
|14|card_not_working|
|15|card_payment_fee_charged|
|16|card_payment_not_recognised|
|17|card_payment_wrong_exchange_rate|
|18|card_swallowed|
|19|cash_withdrawal_charge|
|20|cash_withdrawal_not_recognised|
|21|change_pin|
|22|compromised_card|
|23|contactless_not_working|
|24|country_support|
|25|declined_card_payment|
|26|declined_cash_withdrawal|
|27|declined_transfer|
|28|direct_debit_payment_not_recognised|
|29|disposable_card_limits|
|30|edit_personal_details|
|31|exchange_charge|
|32|exchange_rate|
|33|exchange_via_app|
|34|extra_charge_on_statement|
|35|failed_transfer|
|36|fiat_currency_support|
|37|get_disposable_virtual_card|
|38|get_physical_card|
|39|getting_spare_card|
|40|getting_virtual_card|
|41|lost_or_stolen_card|
|42|lost_or_stolen_phone|
|43|order_physical_card|
|44|passcode_forgotten|
|45|pending_card_payment|
|46|pending_cash_withdrawal|
|47|pending_top_up|
|48|pending_transfer|
|49|pin_blocked|
|50|receiving_money|
|51|Refund_not_showing_up|
|52|request_refund|
|53|reverted_card_payment?|
|54|supported_cards_and_currencies|
|55|terminate_account|
|56|top_up_by_bank_transfer_charge|
|57|top_up_by_card_charge|
|58|top_up_by_cash_or_cheque|
|59|top_up_failed|
|60|top_up_limits|
|61|top_up_reverted|
|62|topping_up_by_card|
|63|transaction_charged_twice|
|64|transfer_fee_charged|
|65|transfer_into_account|
|66|transfer_not_received_by_recipient|
|67|transfer_timing|
|68|unable_to_verify_identity|
|69|verify_my_identity|
|70|verify_source_of_funds|
|71|verify_top_up|
|72|virtual_card_not_working|
|73|visa_or_mastercard|
|74|why_verify_identity|
|75|wrong_amount_of_cash_received|
|76|wrong_exchange_rate_for_cash_withdrawal|
## Training procedure
```
from transformers import pipeline
pipe = pipeline("text-classification", model="nickprock/distilbert-base-uncased-banking77-classification")
pipe("I can't pay by my credit card")
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 3.8732 | 1.0 | 157 | 3.1476 | 0.5370 | 0.4881 |
| 2.5598 | 2.0 | 314 | 1.9780 | 0.6916 | 0.6585 |
| 1.5863 | 3.0 | 471 | 1.2239 | 0.8042 | 0.7864 |
| 0.9829 | 4.0 | 628 | 0.8067 | 0.8565 | 0.8487 |
| 0.6274 | 5.0 | 785 | 0.5837 | 0.8799 | 0.8752 |
| 0.4304 | 6.0 | 942 | 0.4630 | 0.9042 | 0.9040 |
| 0.3106 | 7.0 | 1099 | 0.3982 | 0.9088 | 0.9087 |
| 0.2238 | 8.0 | 1256 | 0.3587 | 0.9110 | 0.9113 |
| 0.1708 | 9.0 | 1413 | 0.3351 | 0.9208 | 0.9208 |
| 0.1256 | 10.0 | 1570 | 0.3242 | 0.9179 | 0.9182 |
| 0.0981 | 11.0 | 1727 | 0.3136 | 0.9211 | 0.9214 |
| 0.0745 | 12.0 | 1884 | 0.3151 | 0.9211 | 0.9213 |
| 0.0601 | 13.0 | 2041 | 0.3089 | 0.9218 | 0.9220 |
| 0.0482 | 14.0 | 2198 | 0.3158 | 0.9214 | 0.9216 |
| 0.0402 | 15.0 | 2355 | 0.3126 | 0.9224 | 0.9226 |
| 0.0344 | 16.0 | 2512 | 0.3143 | 0.9231 | 0.9233 |
| 0.0298 | 17.0 | 2669 | 0.3156 | 0.9231 | 0.9233 |
| 0.0272 | 18.0 | 2826 | 0.3134 | 0.9244 | 0.9247 |
| 0.0237 | 19.0 | 2983 | 0.3156 | 0.9244 | 0.9246 |
| 0.0229 | 20.0 | 3140 | 0.3152 | 0.9240 | 0.9243 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Helsinki-NLP/opus-mt-fr-ar | dc918d744b14d4a3f9661092baf0c2133acc1b4b | 2021-01-18T08:41:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"ar",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-ar | 185 | null | transformers | 3,715 | ---
language:
- fr
- ar
tags:
- translation
license: apache-2.0
---
### fra-ara
* source group: French
* target group: Arabic
* OPUS readme: [fra-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ara/README.md)
* model: transformer
* source language(s): fra
* target language(s): apc ara arq arq_Latn ary arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.ara | 14.4 | 0.439 |
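A minimal usage sketch (added for illustration; not part of the original card) showing the required target-language token; the French example sentence is invented:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target language ID, e.g. >>ara<< (see the target language list above).
src_texts = [">>ara<< Bonjour, comment allez-vous ?"]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```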
### System Info:
- hf_name: fra-ara
- source_languages: fra
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'ar']
- src_constituents: {'fra'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.test.txt
- src_alpha3: fra
- tgt_alpha3: ara
- short_pair: fr-ar
- chrF2_score: 0.439
- bleu: 14.4
- brevity_penalty: 1.0
- ref_len: 7956.0
- src_name: French
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: fr
- tgt_alpha2: ar
- prefer_old: False
- long_pair: fra-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Musixmatch/umberto-wikipedia-uncased-v1 | 713d59922ccb4b5fc31a527ce2d785c23533363b | 2021-02-10T09:53:35.000Z | [
"pytorch",
"camembert",
"fill-mask",
"it",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Musixmatch | null | Musixmatch/umberto-wikipedia-uncased-v1 | 185 | 1 | transformers | 3,716 | ---
language: it
---
# UmBERTo Wikipedia Uncased
[UmBERTo](https://github.com/musixmatchresearch/umberto) is a Roberta-based Language Model trained on large Italian Corpora and uses two innovative approaches: SentencePiece and Whole Word Masking. Now available at [github.com/huggingface/transformers](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1)
<p align="center">
<img src="https://user-images.githubusercontent.com/7140210/72913702-d55a8480-3d3d-11ea-99fc-f2ef29af4e72.jpg" width="700"> </br>
Marco Lodola, Monument to Umberto Eco, Alessandria 2019
</p>
## Dataset
UmBERTo-Wikipedia-Uncased is trained on a relatively small corpus (~7GB) extracted from [Wikipedia-ITA](https://linguatools.org/tools/corpora/wikipedia-monolingual-corpora/).
## Pre-trained model
| Model | WWM | Cased | Tokenizer | Vocab Size | Train Steps | Download |
| ------ | ------ | ------ | ------ | ------ |------ | ------ |
| `umberto-wikipedia-uncased-v1` | YES | YES | SPM | 32K | 100k | [Link](http://bit.ly/35wbSj6) |
This model was trained with [SentencePiece](https://github.com/google/sentencepiece) and Whole Word Masking.
## Downstream Tasks
These results refer to the umberto-wikipedia-uncased model. All details are at the [Umberto](https://github.com/musixmatchresearch/umberto) official page.
#### Named Entity Recognition (NER)
| Dataset | F1 | Precision | Recall | Accuracy |
| ------ | ------ | ------ | ------ | ----- |
| **ICAB-EvalITA07** | **86.240** | 85.939 | 86.544 | 98.534 |
| **WikiNER-ITA** | **90.483** | 90.328 | 90.638 | 98.661 |
#### Part of Speech (POS)
| Dataset | F1 | Precision | Recall | Accuracy |
| ------ | ------ | ------ | ------ | ------ |
| **UD_Italian-ISDT** | 98.563 | 98.508 | 98.618 | **98.717** |
| **UD_Italian-ParTUT** | 97.810 | 97.835 | 97.784 | **98.060** |
## Usage
##### Load UmBERTo Wikipedia Uncased with AutoModel, Autotokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Musixmatch/umberto-wikipedia-uncased-v1")
umberto = AutoModel.from_pretrained("Musixmatch/umberto-wikipedia-uncased-v1")
encoded_input = tokenizer.encode("Umberto Eco è stato un grande scrittore")
input_ids = torch.tensor(encoded_input).unsqueeze(0) # Batch size 1
outputs = umberto(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output
```
##### Predict masked token:
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="Musixmatch/umberto-wikipedia-uncased-v1",
tokenizer="Musixmatch/umberto-wikipedia-uncased-v1"
)
result = fill_mask("Umberto Eco è <mask> un grande scrittore")
# {'sequence': '<s> umberto eco è stato un grande scrittore</s>', 'score': 0.5784581303596497, 'token': 361}
# {'sequence': '<s> umberto eco è anche un grande scrittore</s>', 'score': 0.33813193440437317, 'token': 269}
# {'sequence': '<s> umberto eco è considerato un grande scrittore</s>', 'score': 0.027196012437343597, 'token': 3236}
# {'sequence': '<s> umberto eco è diventato un grande scrittore</s>', 'score': 0.013716378249228, 'token': 5742}
# {'sequence': '<s> umberto eco è inoltre un grande scrittore</s>', 'score': 0.010662357322871685, 'token': 1030}
```
## Citation
All of the original datasets are publicly available or were released with the owners' grant. The datasets are all released under a CC0 or CCBY license.
* UD Italian-ISDT Dataset [Github](https://github.com/UniversalDependencies/UD_Italian-ISDT)
* UD Italian-ParTUT Dataset [Github](https://github.com/UniversalDependencies/UD_Italian-ParTUT)
* I-CAB (Italian Content Annotation Bank), EvalITA [Page](http://www.evalita.it/)
* WIKINER [Page](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) , [Paper](https://www.sciencedirect.com/science/article/pii/S0004370212000276?via%3Dihub)
```
@inproceedings {magnini2006annotazione,
title = {Annotazione di contenuti concettuali in un corpus italiano: I - CAB},
author = {Magnini,Bernardo and Cappelli,Amedeo and Pianta,Emanuele and Speranza,Manuela and Bartalesi Lenzi,V and Sprugnoli,Rachele and Romano,Lorenza and Girardi,Christian and Negri,Matteo},
booktitle = {Proc.of SILFI 2006},
year = {2006}
}
@inproceedings {magnini2006cab,
title = {I - CAB: the Italian Content Annotation Bank.},
author = {Magnini,Bernardo and Pianta,Emanuele and Girardi,Christian and Negri,Matteo and Romano,Lorenza and Speranza,Manuela and Lenzi,Valentina Bartalesi and Sprugnoli,Rachele},
booktitle = {LREC},
pages = {963--968},
year = {2006},
organization = {Citeseer}
}
```
## Authors
**Loreto Parisi**: `loreto at musixmatch dot com`, [loretoparisi](https://github.com/loretoparisi)
**Simone Francia**: `simone.francia at musixmatch dot com`, [simonefrancia](https://github.com/simonefrancia)
**Paolo Magnani**: `paul.magnani95 at gmail dot com`, [paulthemagno](https://github.com/paulthemagno)
## About Musixmatch AI

We do Machine Learning and Artificial Intelligence @[musixmatch](https://twitter.com/Musixmatch)
Follow us on [Twitter](https://twitter.com/musixmatchai) [Github](https://github.com/musixmatchresearch)
|
malduwais/distilbert-base-uncased-finetuned-ner | 66ffbe9687f004a1461fce7fd67cbf1972b91837 | 2021-11-28T09:59:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | malduwais | null | malduwais/distilbert-base-uncased-finetuned-ner | 185 | null | transformers | 3,717 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9244616234124793
- name: Recall
type: recall
value: 0.9364582168027744
- name: F1
type: f1
value: 0.9304212515282871
- name: Accuracy
type: accuracy
value: 0.9833987322668276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9245
- Recall: 0.9365
- F1: 0.9304
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2377 | 1.0 | 878 | 0.0711 | 0.9176 | 0.9254 | 0.9215 | 0.9813 |
| 0.0514 | 2.0 | 1756 | 0.0637 | 0.9213 | 0.9346 | 0.9279 | 0.9831 |
| 0.031 | 3.0 | 2634 | 0.0623 | 0.9245 | 0.9365 | 0.9304 | 0.9834 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sayef/fsner-bert-base-uncased | aea2529c44e8f7ab440f5d8ece7c48c7293e00bf | 2022-03-29T14:20:35.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2008.10570",
"transformers"
] | feature-extraction | false | sayef | null | sayef/fsner-bert-base-uncased | 185 | 5 | transformers | 3,718 | # FSNER
Implemented by [sayef](https://huggingface.co/sayef).
# Overview
The FSNER model was proposed in [Example-Based Named Entity Recognition](https://arxiv.org/abs/2008.10570) by Morteza
Ziyadi, Yuting Sun, Abhishek Goswami, Jade Huang, Weizhu Chen. To identify entity spans in a new domain, it uses a
train-free few-shot learning approach inspired by question-answering.
## Abstract
> We present a novel approach to named entity recognition (NER) in the presence of scarce data that we call example-based NER. Our train-free few-shot learning approach takes inspiration from question-answering to identify entity spans in a new and unseen domain. In comparison with the current state-of-the-art, the proposed method performs significantly better, especially when using a low number of support examples.
## Model Training Details
| identifier | epochs | datasets |
| ---------- |:------:|:-----------------------------------------------------------------------------------------------:|
| [sayef/fsner-bert-base-uncased](https://huggingface.co/sayef/fsner-bert-base-uncased) | 25 | ontonotes5, conll2003, wnut2017, mit_movie_trivia, mit_restaurant and fin (Alvarado et al.). |
## Installation and Example Usage
You can use the FSNER model in 3 ways:
1. Install directly from PyPI: `pip install fsner` and import the model as shown in the code example below
or
2. Install from source: `pip install .` and import the model as shown in the code example below
or
3. Clone [repo](https://github.com/sayef/fsner) and add absolute path of `fsner/src` directory to your PYTHONPATH and
import the model as shown in the code example below
```python
import json
from fsner import FSNERModel, FSNERTokenizerUtils, pretty_embed
query_texts = [
"Does Luke's serve lunch?",
"Chang does not speak Taiwanese very well.",
"I like Berlin."
]
# Each list in support_texts contains the examples of one entity type.
# Wrap entities with [E] and [/E] in the examples.
# Each sentence should have only one pair of [E] ... [/E].
support_texts = {
"Restaurant": [
"What time does [E] Subway [/E] open for breakfast?",
"Is there a [E] China Garden [/E] restaurant in newark?",
"Does [E] Le Cirque [/E] have valet parking?",
"Is there a [E] McDonalds [/E] on main street?",
"Does [E] Mike's Diner [/E] offer huge portions and outdoor dining?"
],
"Language": [
"Although I understood no [E] French [/E] in those days , I was prepared to spend the whole day with Chien - chien .",
"like what the hell 's that called in [E] English [/E] ? I have to register to be here like since I 'm a foreigner .",
"So , I 'm also working on an [E] English [/E] degree because that 's my real interest .",
"Al - Jazeera TV station , established in November 1996 in Qatar , is an [E] Arabic - language [/E] news TV station broadcasting global news and reports nonstop around the clock .",
"They think it 's far better for their children to be here improving their [E] English [/E] than sitting at home in front of a TV . \"",
"The only solution seemed to be to have her learn [E] French [/E] .",
"I have to read sixty pages of [E] Russian [/E] today ."
]
}
device = 'cpu'
tokenizer = FSNERTokenizerUtils("sayef/fsner-bert-base-uncased")
queries = tokenizer.tokenize(query_texts).to(device)
supports = tokenizer.tokenize(list(support_texts.values())).to(device)
model = FSNERModel("sayef/fsner-bert-base-uncased")
model.to(device)
p_starts, p_ends = model.predict(queries, supports)
# One can prepare supports once and reuse multiple times with different queries
# ------------------------------------------------------------------------------
# start_token_embeddings, end_token_embeddings = model.prepare_supports(supports)
# p_starts, p_ends = model.predict(queries, start_token_embeddings=start_token_embeddings,
# end_token_embeddings=end_token_embeddings)
output = tokenizer.extract_entity_from_scores(query_texts, queries, p_starts, p_ends,
entity_keys=list(support_texts.keys()), thresh=0.50)
print(json.dumps(output, indent=2))
# install displacy for pretty embed
pretty_embed(query_texts, output, list(support_texts.keys()))
```
<!DOCTYPE html>
<html lang="en">
<head>
<title>displaCy</title>
</head>
<body style="font-size: 16px; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', 'Segoe UI Symbol'; padding: 4rem 2rem; direction: ltr">
<figure style="margin-bottom: 6rem">
<div class="entities" style="line-height: 2.5; direction: ltr">
<div class="entities" style="line-height: 2.5; direction: ltr">Does
<mark class="entity" style="background: #7aecec; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Luke's
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">Restaurant</span>
</mark>
serve lunch?</div>
<div class="entities" style="line-height: 2.5; direction: ltr">Chang does not speak
<mark class="entity" style="background: #bfeeb7; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Taiwanese
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">Language</span>
</mark>
very well.</div>
<div class="entities" style="line-height: 2.5; direction: ltr">I like Berlin.</div>
</div>
</figure>
</body>
</html>
## Datasets preparation
1. We need to convert the dataset into the following format. Let's say we have a dataset file train.json like the following.
   - Each list in supports contains the examples of one entity type.
   - Wrap entities with [E] and [/E] in the examples.
   - Each example should have only one pair of [E] ... [/E].
```json
{
"CARDINAL_NUMBER": [
"Washington , cloudy , [E] 2 [/E] to 6 degrees .",
"New Dehli , sunny , [E] 6 [/E] to 19 degrees .",
"Well this is number [E] two [/E] .",
"....."
],
"LANGUAGE": [
"They do n't have the Quicken [E] Dutch [/E] version ?",
"they learned a lot of [E] German [/E] .",
"and then [E] Dutch [/E] it 's Mifrau",
"...."
],
"MONEY": [
"Per capita personal income ranged from $ [E] 11,116 [/E] in Mississippi to $ 23,059 in Connecticut ... .",
"The trade surplus was [E] 582 million US dollars [/E] .",
"It settled with a loss of 4.95 cents at $ [E] 1.3210 [/E] a pound .",
"...."
]
}
```
2. Converted ontonotes5 dataset can be found here:
1. [train](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.train.json)
2. [dev](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.dev.json)
3. Then the trainer script can be used to train/evaluate your FSNER model.
```bash
fsner trainer --pretrained-model bert-base-uncased --mode train --train-data train.json --val-data val.json \
--train-batch-size 6 --val-batch-size 6 --n-examples-per-entity 10 --neg-example-batch-ratio 1/3 --max-epochs 25 --device gpu \
--gpus -1 --strategy ddp
``` |
sonoisa/t5-base-japanese-mC4-Wikipedia | cbf67abe3b28b3ec4c32bbead1207cf2366a3c7f | 2021-09-23T16:29:58.000Z | [
"pytorch",
"ja",
"dataset:wikipedia",
"dataset:c4",
"transformers",
"t5",
"text2text-generation",
"seq2seq",
"license:cc-by-sa-4.0"
] | text2text-generation | false | sonoisa | null | sonoisa/t5-base-japanese-mC4-Wikipedia | 185 | 1 | transformers | 3,719 | ---
language: ja
tags:
- t5
- text2text-generation
- seq2seq
license: cc-by-sa-4.0
datasets:
- wikipedia
- c4
---
# Japanese T5 Pretrained Model
This is a T5 (Text-to-Text Transfer Transformer) model pretrained on a Japanese corpus.
The model was pretrained on the following Japanese corpora (about 890GB in total):
* The Japanese dump of [Wikipedia](https://ja.wikipedia.org) (as of July 6, 2020)
* The Japanese portion of [mC4](https://github.com/allenai/allennlp/discussions/5056) (specifically, the ja split of c4/multilingual)
This checkpoint has only been pretrained; it needs to be fine-tuned before it can be used for a specific task.
Like other language models trained on large-scale corpora, this model may produce biased (unethical, harmful, or otherwise prejudiced) outputs that reflect biases in the training data.
Please keep this potential issue in mind and use the model only in applications where such outputs cannot cause harm.
# Sample code for transfer learning
See https://github.com/sonoisa/t5-japanese and change the model name "t5-base-japanese" to "t5-base-japanese-mC4-Wikipedia".
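A minimal loading sketch (not part of the original card); note that this checkpoint is pretrain-only, so its outputs are only meaningful after fine-tuning:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("sonoisa/t5-base-japanese-mC4-Wikipedia")
model = T5ForConditionalGeneration.from_pretrained("sonoisa/t5-base-japanese-mC4-Wikipedia")

# The checkpoint is pretrain-only: fine-tune it on your downstream task before use.
print(f"loaded {model.num_parameters() / 1e6:.0f}M parameters")
```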
# Benchmark
Accuracy on a news-article genre classification task using the livedoor news corpus is as follows.
Japanese T5 pretrained on mC4/ja + Wikipedia ([t5-base-japanese-mC4-Wikipedia](https://huggingface.co/sonoisa/t5-base-japanese-mC4-Wikipedia), 222M parameters):
| label | precision | recall | f1-score | support |
| ----------- | ----------- | ------- | -------- | ------- |
| 0 | 0.91 | 0.93 | 0.92 | 130 |
| 1 | 0.96 | 0.95 | 0.95 | 121 |
| 2 | 0.96 | 0.96 | 0.96 | 123 |
| 3 | 0.87 | 0.90 | 0.89 | 82 |
| 4 | 0.96 | 0.99 | 0.98 | 129 |
| 5 | 0.97 | 0.96 | 0.97 | 141 |
| 6 | 1.00 | 0.98 | 0.99 | 127 |
| 7 | 1.00 | 0.98 | 0.99 | 127 |
| 8 | 0.98 | 0.97 | 0.97 | 120 |
| accuracy | | | 0.96 | 1100 |
| macro avg | 0.96 | 0.96 | 0.96 | 1100 |
| weighted avg | 0.96 | 0.96 | 0.96 | 1100 |
For comparison: Japanese T5 pretrained on OSCAR + CC-100 + Wikipedia ([t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese), 222M parameters):
| label | precision | recall | f1-score | support |
| ----------- | ----------- | ------- | -------- | ------- |
| 0 | 0.96 | 0.94 | 0.95 | 130 |
| 1 | 0.98 | 0.99 | 0.99 | 121 |
| 2 | 0.96 | 0.96 | 0.96 | 123 |
| 3 | 0.86 | 0.91 | 0.89 | 82 |
| 4 | 0.96 | 0.97 | 0.97 | 129 |
| 5 | 0.96 | 0.96 | 0.96 | 141 |
| 6 | 0.98 | 0.98 | 0.98 | 127 |
| 7 | 1.00 | 0.99 | 1.00 | 127 |
| 8 | 0.99 | 0.97 | 0.98 | 120 |
| accuracy | | | 0.97 | 1100 |
| macro avg | 0.96 | 0.96 | 0.96 | 1100 |
| weighted avg | 0.97 | 0.97 | 0.97 | 1100 |
## Disclaimer
The author of this model has taken great care with its content and functionality, but does not guarantee that the model's outputs are accurate or safe, and assumes no responsibility for them. Should a user suffer any inconvenience or damage as a result of using this model, the authors of the model and datasets and their affiliated organizations bear no responsibility. Users are obliged to make clear that the authors and their organizations accept no liability.
## License
[CC-BY SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/deed.ja)
Please also be careful to comply with [Common Crawl's terms of use](http://commoncrawl.org/terms-of-use/).
|
speechbrain/asr-wav2vec2-commonvoice-en | a1a126cc07ff5426a71487a89132bd1b70c7155e | 2022-06-05T17:24:43.000Z | [
"wav2vec2",
"feature-extraction",
"en",
"dataset:commonvoice",
"arxiv:2106.04624",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-wav2vec2-commonvoice-en | 185 | 4 | speechbrain | 3,720 | ---
language: "en"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- commonvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC trained on CommonVoice English (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on CommonVoice (English Language) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test WER | GPUs |
|:--------------:|:--------------:| :--------:|
| 03-06-21 | 15.69 | 2xV100 32GB |
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and is trained on
the train transcriptions (train.tsv) of CommonVoice (EN).
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([wav2vec2-lv60-large](https://huggingface.co/facebook/wav2vec2-large-lv60)) is combined with two DNN layers and finetuned on CommonVoice En.
The obtained final acoustic representation is given to the CTC decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in English)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-en", savedir="pretrained_models/asr-wav2vec2-commonvoice-en")
asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-en/example.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
### Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/CommonVoice/ASR/seq2seq
python train.py hparams/train_en_with_wav2vec.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1tjz6IZmVRkuRE97E7h1cXFoGTer7pT73?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa | 9685a2a42a5f777ae91768556e7fe1124819b99f | 2021-10-18T09:44:57.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa | 184 | null | transformers | 3,721 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
---
# CAMeLBERT-CA POS-MSA Model
## Model description
**CAMeLBERT-CA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa')
>>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999758, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.9997559, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.99996257, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.9958452, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.9999635, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.99991685, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99997497, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.9999795, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99924207, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.99994195, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.9997414, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
alenusch/mt5base-ruparaphraser | 827b216941a88188c88ebbe0f35c0aaa87a8f642 | 2020-12-19T17:39:00.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alenusch | null | alenusch/mt5base-ruparaphraser | 184 | null | transformers | 3,722 | Entry not found |
digio/Twitter4SSE | 91e20090fe4fa7e57fd3ebfad3dd89c9538b5669 | 2021-12-17T09:01:29.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"en",
"arxiv:2110.02030",
"transformers",
"Pytorch",
"Sentence Transformers",
"Transformers",
"license:apache-2.0",
"sentence-similarity"
] | sentence-similarity | false | digio | null | digio/Twitter4SSE | 184 | 1 | transformers | 3,723 | ---
language:
- en
pipeline_tag: sentence-similarity
tags:
- Pytorch
- Sentence Transformers
- Transformers
license: "apache-2.0"
---
# Twitter4SSE
This model maps texts to 768 dimensional dense embeddings that encode semantic similarity.
It was trained with Multiple Negatives Ranking Loss (MNRL) on a Twitter dataset.
It was initialized from [BERTweet](https://huggingface.co/vinai/bertweet-base) and trained with [Sentence-transformers](https://www.sbert.net/).
## Usage
The model is easier to use with the sentence-transformers library:
```
pip install -U sentence-transformers
```
```
from sentence_transformers import SentenceTransformer
sentences = ["This is the first tweet", "This is the second tweet"]
model = SentenceTransformer('digio/Twitter4SSE')
embeddings = model.encode(sentences)
print(embeddings)
```
Without the sentence-transformers library, please refer to [this repository](https://huggingface.co/sentence-transformers) for detailed instructions on how to use Sentence Transformers on Hugging Face.
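For reference, a minimal plain-transformers sketch; mean pooling is assumed here and may differ from the exact pooling configured in the sentence-transformers checkpoint:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("digio/Twitter4SSE")
model = AutoModel.from_pretrained("digio/Twitter4SSE")

sentences = ["This is the first tweet", "This is the second tweet"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded)

# Mean pooling over token embeddings, ignoring padding positions (assumed strategy)
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (output.last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(embeddings.shape)
```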
## Citing & Authors
The official paper [Exploiting Twitter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings](https://arxiv.org/abs/2110.02030) will be presented at EMNLP 2021. Further details will be available soon.
```
@inproceedings{di-giovanni-brambilla-2021-exploiting,
title = "Exploiting {T}witter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings",
author = "Di Giovanni, Marco and
Brambilla, Marco",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.780",
pages = "9902--9910",
}
```
The official code is available on [GitHub](https://github.com/marco-digio/Twitter4SSE)
|
philschmid/distilbert-base-multilingual-cased-sentiment | b45a713783e49ac09c94dfda4bff847f4ad771c5 | 2022-01-24T12:14:53.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | philschmid | null | philschmid/distilbert-base-multilingual-cased-sentiment | 184 | null | transformers | 3,724 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: all_languages
metrics:
- name: Accuracy
type: accuracy
value: 0.7648
- name: F1
type: f1
value: 0.7648
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiment
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5842
- Accuracy: 0.7648
- F1: 0.7648
## Model description
More information needed
## Intended uses & limitations
More information needed
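Although usage is not documented here, a minimal sketch with the standard text-classification pipeline (the example review is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="philschmid/distilbert-base-multilingual-cased-sentiment",
)
print(classifier("Das Produkt ist großartig, ich bin sehr zufrieden!"))
```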
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.6405 | 0.53 | 5000 | 0.5826 | 0.7498 | 0.7498 |
| 0.5698 | 1.07 | 10000 | 0.5686 | 0.7612 | 0.7612 |
| 0.5286 | 1.6 | 15000 | 0.5593 | 0.7636 | 0.7636 |
| 0.5141 | 2.13 | 20000 | 0.5842 | 0.7648 | 0.7648 |
| 0.4763 | 2.67 | 25000 | 0.5736 | 0.7637 | 0.7637 |
| 0.4549 | 3.2 | 30000 | 0.6027 | 0.7593 | 0.7593 |
| 0.4231 | 3.73 | 35000 | 0.6017 | 0.7552 | 0.7552 |
| 0.3965 | 4.27 | 40000 | 0.6489 | 0.7551 | 0.7551 |
| 0.3744 | 4.8 | 45000 | 0.6426 | 0.7534 | 0.7534 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
facebook/maskformer-swin-base-ade | 4f7b799a3566c531042bb79874739fdc0522e20e | 2022-04-04T16:01:58.000Z | [
"pytorch",
"maskformer",
"dataset:ade-20k",
"arxiv:2107.06278",
"transformers",
"vision",
"image-segmentatiom",
"license:apache-2.0"
] | null | false | facebook | null | facebook/maskformer-swin-base-ade | 184 | null | transformers | 3,725 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- ade-20k
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# MaskFormer
MaskFormer model trained on ADE20k. It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses semantic segmentation with a mask classification paradigm instead of per-pixel classification: it predicts a set of masks and their corresponding class labels.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade")
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")
>>> outputs = model(**inputs)
>>> # model predicts class_queries_logits of shape `(batch_size, num_queries)`
>>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
>>> class_queries_logits = outputs.class_queries_logits
>>> masks_queries_logits = outputs.masks_queries_logits
>>> # you can pass them to feature_extractor for postprocessing
>>> output = feature_extractor.post_process_segmentation(outputs)
>>> output = feature_extractor.post_process_semantic_segmentation(outputs)
>>> output = feature_extractor.post_process_panoptic_segmentation(outputs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). |
BlackSamorez/ebanko-base | f21b1aa7d0a080a8cce0f154dcc9f9eae4888eac | 2022-04-29T12:29:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"transformers",
"PyTorch",
"Transformers",
"autotrain_compatible"
] | text2text-generation | false | BlackSamorez | null | BlackSamorez/ebanko-base | 184 | null | transformers | 3,726 | ---
language:
- ru
tags:
- PyTorch
- Transformers
---
# ebanko-base
Model was finetuned by [black_samorez](https://github.com/BlackSamorez).
Based off [sberbank-ai/ruT5-base](https://huggingface.co/sberbank-ai/ruT5-base).
Finetuned on the [russe_detox_2022](https://github.com/skoltech-nlp/russe_detox_2022) train split to toxify text.
I recommend using it with **temperature = 1.5**
* Task: `text2text generation`
* Type: `encoder-decoder`
* Tokenizer: `bpe`
* Dict size: `32 101`
* Num Parameters: `222 M`
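A minimal generation sketch (not part of the original card) using the recommended temperature of 1.5; the Russian input sentence is illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("BlackSamorez/ebanko-base")
model = AutoModelForSeq2SeqLM.from_pretrained("BlackSamorez/ebanko-base")

# The model rewrites neutral Russian text in a more toxic register.
inputs = tokenizer("Ты мне не нравишься.", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, temperature=1.5, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```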
---
license: apache-2.0
---
|
fujuta/DialoGPT-medium-RonWeasley | c3fe70b1d2f4aa27ea3d98e5505a2e99c18b5f49 | 2022-05-25T00:23:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | fujuta | null | fujuta/DialoGPT-medium-RonWeasley | 184 | null | transformers | 3,727 | ---
tags:
- conversational
--- |
alenusch/mt5small-ruparaphraser | 1d8fcea52a546efa20a28147225c56840dd4b5e8 | 2021-06-23T15:05:47.000Z | [
"pytorch",
"jax",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alenusch | null | alenusch/mt5small-ruparaphraser | 183 | null | transformers | 3,728 | Entry not found |
ghadeermobasher/BC2GM-Gene_ImbalancedBioM-ELECTRA-Base-Discriminator | 4c5e16ad0d7c77d23da99e5baa2868faeba69a09 | 2022-01-23T01:04:00.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC2GM-Gene_ImbalancedBioM-ELECTRA-Base-Discriminator | 183 | null | transformers | 3,729 | Entry not found |
surajp/gpt2-hindi | bb760a44a89a1fada37bafaa5b68f5f791b54c93 | 2021-05-23T13:02:32.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | surajp | null | surajp/gpt2-hindi | 183 | null | transformers | 3,730 | Entry not found |
IDEA-CCNL/YuyuanQA-GPT2-3.5B | 3e50e27e3aded301abff14d19cd259b5740929fc | 2022-04-18T02:46:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"QA",
"medical",
"license:apache-2.0"
] | text-generation | false | IDEA-CCNL | null | IDEA-CCNL/YuyuanQA-GPT2-3.5B | 183 | null | transformers | 3,731 | ---
language:
- en
inference:
parameters:
temperature: 0.7
top_p: 0.6
max_new_tokens: 64
num_return_sequences: 3
do_sample: true
license: apache-2.0
tags:
- QA
- medical
- gpt2
widget:
- text: "Question:What should gout patients pay attention to in diet? Answer:"
example_title: "test Question1"
- text: "Question:How should covid-19 be prevented? Answer:"
example_title: "test Question2"
---
# YuyuanQA-GPT2-3.5B model (Medical), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
**YuyuanQA-GPT2-3.5B** is fine-tuned on 10,000 medical QA pairs, starting from the **Yuyuan-3.5B** model.
**Question answering (QA)** is an important subject related to natural language processing and information retrieval, with many real-world industrial applications. **Traditional approaches are often complex**, with core algorithms drawing on **machine learning**, **deep learning**, and **knowledge graphs**.
We hope to explore a **simpler** and more **effective** way to use the powerful memory and understanding abilities of large models to perform question answering directly. The YuyuanQA-GPT2-3.5B model is such an attempt, and it **performs well in subjective tests**. We also evaluated 100 QA pairs with ***BLEU***:
| gram | 1-gram | 2-gram | 3-gram | 4-gram |
| ----------- | ----------- |------|------|------|
| **bleu_score** | 0.357727 | 0.2713 | 0.22304 | 0.19099 |
## Usage
### load model
```python
from transformers import GPT2Tokenizer,GPT2LMHeadModel
hf_model_path = 'model_path or model name'
tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path)
model = GPT2LMHeadModel.from_pretrained(hf_model_path)
```
### generation
```python
question = "What should gout patients pay attention to in diet?"
inputs = tokenizer(f'Question:{question} answer:', return_tensors='pt')
generation_output = model.generate(**inputs,
return_dict_in_generate=True,
output_scores=True,
max_length=150,
# max_new_tokens=80,
do_sample=True,
top_p = 0.6,
eos_token_id=50256,
pad_token_id=0,
num_return_sequences = 5)
for idx,sentence in enumerate(generation_output.sequences):
print('next sentence %d:\n'%idx,
tokenizer.decode(sentence).split('<|endoftext|>')[0])
print('*'*40)
```
## example
We made a medical Q&A demo with the YuyuanQA-GPT2-3.5B model. In the future, we plan to release it as a WeChat app, so stay tuned.

## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
tezign/BERT-LSTM-based-ABSA | 30bf7f71427501ca7a5300825589f2a708843566 | 2022-07-20T10:14:35.000Z | [
"pytorch",
"BertABSAForSequenceClassification",
"text-classification",
"en",
"dataset:semeval2014",
"arxiv:2002.04815",
"transformers",
"aspect-term-sentiment-analysis",
"ATSA"
] | text-classification | false | tezign | null | tezign/BERT-LSTM-based-ABSA | 183 | null | transformers | 3,732 | ---
language: en
tags:
- aspect-term-sentiment-analysis
- pytorch
- ATSA
datasets:
- semeval2014
widget:
- text: "[CLS] The appearance is very nice, but the battery life is poor. [SEP] appearance [SEP] "
---
# Note
`Aspect term sentiment analysis`
A BERT-LSTM-based baseline, built on the *BERT LSTM* implementation from https://github.com/avinashsai/BERT-Aspect. The model was trained on the SemEval2014 Task 4 laptop and restaurant datasets.
Our Github repo: https://github.com/tezignlab/BERT-LSTM-based-ABSA
Code for the paper "Utilizing BERT Intermediate Layers for Aspect Based Sentiment Analysis and Natural Language Inference" https://arxiv.org/pdf/2002.04815.pdf.
# Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
MODEL = "tezign/BERT-LSTM-based-ABSA"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, trust_remote_code=True)
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
result = classifier([
{"text": "The appearance is very nice, but the battery life is poor", "text_pair": "appearance"},
{"text": "The appearance is very nice, but the battery life is poor", "text_pair": "battery"}
],
function_to_apply="softmax")
print(result)
"""
print result
>> [{'label': 'positive', 'score': 0.9129462838172913}, {'label': 'negative', 'score': 0.8834680914878845}]
"""
``` |
kakife3586/Eka.mini | c9ae92c81c76361f42058960d4f6a3dfbf6284fd | 2022-07-15T08:21:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | kakife3586 | null | kakife3586/Eka.mini | 183 | null | transformers | 3,733 | Entry not found |
hfl/chinese-legal-electra-base-generator | c1f0ad95b487f1ed588a5e111095a16e01333b95 | 2021-10-30T23:52:25.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"zh",
"arxiv:2004.13922",
"transformers",
"license:apache-2.0"
] | null | false | hfl | null | hfl/chinese-legal-electra-base-generator | 182 | 2 | transformers | 3,734 | ---
language:
- zh
license: "apache-2.0"
---
# This model is specifically designed for the legal domain.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
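As an illustrative sketch only (not from the original card), the generator checkpoint can be queried as a masked language model through the standard fill-mask pipeline; the legal-domain example sentence is an assumption:
```python
from transformers import pipeline

# The ELECTRA generator is a small masked language model, so fill-mask applies.
fill_mask = pipeline("fill-mask", model="hfl/chinese-legal-electra-base-generator")

# "In this case, the plaintiff applies to withdraw the lawsuit." with one character masked
print(fill_mask("本案原[MASK]申请撤诉。"))
```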
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
veroman/TourBERT | 81901413624c13f4867b0c16c1fc4f9bb4ca67ea | 2022-01-13T20:38:31.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | veroman | null | veroman/TourBERT | 182 | null | transformers | 3,735 | Entry not found |
Jenwvwmabskvwh/DialoGPT-small-josh444 | 706a7ca7d7dc6c991e8672b6d4cdfa85cb0e316f | 2022-07-25T09:19:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Jenwvwmabskvwh | null | Jenwvwmabskvwh/DialoGPT-small-josh444 | 182 | 0 | transformers | 3,736 | ---
tags:
- conversational
---
# Josh DialoGPT Model |
cometrain/neurotitle-rugpt3-small | c97068c6016e5162e6cb15b1c5ea8dbf828f50bc | 2021-12-07T07:54:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"en",
"dataset:All-NeurIPS-Papers-Scraper",
"transformers",
"Cometrain AutoCode",
"Cometrain AlphaML"
] | text-generation | false | cometrain | null | cometrain/neurotitle-rugpt3-small | 181 | 1 | transformers | 3,737 | ---
language:
- ru
- en
tags:
- Cometrain AutoCode
- Cometrain AlphaML
datasets:
- All-NeurIPS-Papers-Scraper
widget:
- text: "NIPSE:"
example_title: "NIPS"
- text: "Learning CNN"
example_title: "Learning CNN"
- text: "ONNX:"
example_title: "ONNX"
- text: "BERT:"
example_title: "BERT"
inference:
parameters:
temperature: 0.9
---
# neurotitle-rugpt3-small
Model based on [ruGPT-3](https://huggingface.co/sberbank-ai) for generating scientific paper titles.
Trained on [All NeurIPS (NIPS) Papers](https://www.kaggle.com/rowhitswami/nips-papers-1987-2019-updated) dataset.
Use exclusively as a crazier alternative to SCIgen.
## Made with Cometrain AlphaML & AutoCode
This model was automatically fine-tuned using the Cometrain AlphaML framework and tested with CI/CD pipeline made by Cometrain AutoCode
## Cometrain AlphaML command
```shell
$ cometrain create --name neurotitle --model auto --task task_0x2231.txt --output transformers
```
## Use with Transformers
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model="cometrain/neurotitle-rugpt3-small")
generator("BERT:", max_length=50)
```
|
M-FAC/bert-mini-finetuned-sst2 | 431f327124d470b035264800d0eaabcc667979e2 | 2021-12-13T08:13:26.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
] | text-classification | false | M-FAC | null | M-FAC/bert-mini-finetuned-sst2 | 181 | null | transformers | 3,738 | # BERT-mini model finetuned with M-FAC
This model is finetuned on SST-2 dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on SST-2 validation set:
```bash
accuracy = 84.74
```
Mean and standard deviation for 5 runs on SST-2 validation set:
| | Accuracy |
|:----:|:-----------:|
| Adam | 85.46 ± 0.58 |
| M-FAC | 84.20 ± 0.58 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 1234 \
--model_name_or_path prajjwal1/bert-mini \
--task_name sst2 \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 3 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
cambridgeltl/trans-encoder-bi-simcse-roberta-large | a1ac9780910ff21dfa0c90296b8e221ead1f55a7 | 2021-10-18T13:29:43.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2109.13059",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/trans-encoder-bi-simcse-roberta-large | 181 | null | transformers | 3,739 | ---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
---
### cambridgeltl/trans-encoder-bi-simcse-roberta-large
An unsupervised sentence encoder (bi-encoder) proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2109.13059.pdf). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using [princeton-nlp/unsup-simcse-roberta-large](https://huggingface.co/princeton-nlp/unsup-simcse-roberta-large) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
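A minimal encoding sketch (not part of the original card) that takes the `[CLS]` token hidden state before the pooler, as recommended above; the sentence pair is illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/trans-encoder-bi-simcse-roberta-large")
model = AutoModel.from_pretrained("cambridgeltl/trans-encoder-bi-simcse-roberta-large")

sentences = ["A cat sits on the mat.", "A kitten is resting on the rug."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# [CLS] (first token) representation before the pooler
embeddings = outputs.last_hidden_state[:, 0, :]
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0).item())
```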
### Citation
```bibtex
@article{liu2021trans,
title={Trans-Encoder: Unsupervised sentence-pair modelling through self-and mutual-distillations},
author={Liu, Fangyu and Jiao, Yunlong and Massiah, Jordan and Yilmaz, Emine and Havrylov, Serhii},
journal={arXiv preprint arXiv:2109.13059},
year={2021}
}
```
|
mrm8488/GPT-2-finetuned-common_gen | 987b8a56a1954c7b785f52c4940f9487fa42049f | 2021-05-23T10:12:07.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:common_gen",
"transformers"
] | text-generation | false | mrm8488 | null | mrm8488/GPT-2-finetuned-common_gen | 181 | 2 | transformers | 3,740 | ---
language: en
datasets:
- common_gen
widget:
- text: "<|endoftext|> apple, tree, pick:"
---
# GPT-2 fine-tuned on CommonGen
[GPT-2](https://huggingface.co/gpt2) fine-tuned on [CommonGen](https://inklab.usc.edu/CommonGen/index.html) for *Generative Commonsense Reasoning*.
## Details of GPT-2
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Details of the dataset 📚
CommonGen is a constrained text generation task, associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning. Given a set of common concepts, the task is to generate a coherent sentence describing an everyday scenario using these concepts.
CommonGen is challenging because it inherently requires 1) relational reasoning using background commonsense knowledge, and 2) compositional generalization ability to work on unseen concept combinations. Our dataset, constructed through a combination of crowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and 50k sentences in total.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| common_gen | train | 67389 |
| common_gen | valid | 4018 |
| common_gen | test | 1497 |
## Model fine-tuning 🏋️
You can find the fine-tuning script [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling)
## Model in Action 🚀
```bash
python ./transformers/examples/text-generation/run_generation.py \
--model_type=gpt2 \
--model_name_or_path="mrm8488/GPT-2-finetuned-common_gen" \
--num_return_sequences 1 \
--prompt "<|endoftext|> kid, room, dance:" \
--stop_token "."
```
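Alternatively, a minimal Python sketch (not from the original card) using the transformers pipeline, with the prompt format taken from the widget example above:
```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="mrm8488/GPT-2-finetuned-common_gen")
set_seed(42)

prompt = "<|endoftext|> kid, room, dance:"
print(generator(prompt, max_length=32, num_return_sequences=1)[0]["generated_text"])
```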
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/spanish-gpt2 | c53565b371d3d52620c5ef24800124763ecb4b54 | 2021-07-16T11:02:28.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"es",
"dataset:large_spanish_corpus",
"transformers",
"GPT-2",
"license:mit"
] | text-generation | false | mrm8488 | null | mrm8488/spanish-gpt2 | 181 | 5 | transformers | 3,741 | ---
language: es
tags:
- GPT-2
datasets:
- large_spanish_corpus
widget:
- text: "Érase una vez un"
license: mit
---
# Spanish GPT-2 trained on [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus)
This is a Spanish GPT-2 model trained from scratch on the [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus) aka BETO's corpus with [Flax](https://github.com/google/flax)
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Dataset
The dataset is about 20 GB. 95% of the data was used for training and the remaining 5% for validation.
## Metrics (on evaluation dataset)
- Loss: 2.413
- Perplexity: 11.36
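A minimal generation sketch (not part of the original card; the sampling parameters are illustrative):
```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="mrm8488/spanish-gpt2")
set_seed(42)

print(generator("Érase una vez un", max_length=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```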
## Team members
- Manuel Romero ([mrm8488](https://huggingface.co/mrm8488))
- María Grandury ([mariagrandury](https://huggingface.co/))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Daniel Vera ([daveni](https://huggingface.co/daveni))
- Sri Lakshmi ([srisweet](https://huggingface.co/srisweet))
- José Posada ([jdposa](https://huggingface.co/jdposa))
- Santiago Hincapie ([shpotes](https://huggingface.co/shpotes))
- Jorge ([jorgealro](https://huggingface.co/jorgealro))
## Useful links
- [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6)
- [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md)
- [Community Week thread](https://discuss.huggingface.co/t/pretrain-gpt2-from-scratch-in-spanish/7086/8) |
lewiswu1209/Vicky | 8b1dcb0d9fcbddbaa3cc9bec0ba57596b7f954e3 | 2022-07-17T10:01:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | lewiswu1209 | null | lewiswu1209/Vicky | 181 | null | transformers | 3,742 | ---
license: mit
---
# Vicky
Vicky builds on the [model trained on 500k chitchat samples](https://github.com/yangjianxin1/GPT2-chitchat/#model_share) shared by the author of the open-source project GPT2-chitchat.
I modified vocab.txt to add the tokens `[NAME][NICK][GENDER][YEAROFBIRTH][MONTHOFBIRTH][DAYOFBIRTH][ZODIAC][AGE]`, and then prepared some training samples along the lines of:
```
你是谁?
我是[NAME]。
你叫什么?
我叫[NAME]。
你多大啦?
我[AGE]岁了。
```
But it seems the training scrambled the model's brain a bit; I'll prepare some more data later and see whether it can be trained back. |
Nakul24/AD_ChatBot | 143b0aba3a0b91823cde5725f99040370122bbda | 2022-07-17T08:51:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Nakul24 | null | Nakul24/AD_ChatBot | 181 | null | transformers | 3,743 | ---
tags:
- conversational
---
# Hello |
Helsinki-NLP/opus-mt-ilo-en | cb5fb67823253ca21e643f7ef6636bc6cbdaab34 | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ilo",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ilo-en | 180 | null | transformers | 3,744 | ---
language:
- ilo
- en
tags:
- translation
license: apache-2.0
---
### ilo-eng
* source group: Iloko
* target group: English
* OPUS readme: [ilo-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ilo-eng/README.md)
* model: transformer-align
* source language(s): ilo
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ilo-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ilo-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ilo-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ilo.eng | 36.4 | 0.558 |
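A minimal usage sketch (not included in the original card) with the standard transformers translation pipeline; the Ilocano example sentence is illustrative:
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ilo-en")

# "Good morning to all of you." (illustrative Ilocano input)
print(translator("Naimbag a bigat kadakayo amin.")[0]["translation_text"])
```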
### System Info:
- hf_name: ilo-eng
- source_languages: ilo
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ilo-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ilo', 'en']
- src_constituents: {'ilo'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ilo-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ilo-eng/opus-2020-06-16.test.txt
- src_alpha3: ilo
- tgt_alpha3: eng
- short_pair: ilo-en
- chrF2_score: 0.5579999999999999
- bleu: 36.4
- brevity_penalty: 1.0
- ref_len: 7384.0
- src_name: Iloko
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: ilo
- tgt_alpha2: en
- prefer_old: False
- long_pair: ilo-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
keepitreal/vietnamese-sbert | a9467ef2ef47caa6448edeabfd8e5e5ce0fa2a23 | 2022-02-19T08:01:34.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"vietnamese"
] | sentence-similarity | false | keepitreal | null | keepitreal/vietnamese-sbert | 180 | 2 | sentence-transformers | 3,745 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- vietnamese
---
# vietnamese-sbert
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search on Vietnamese text.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Cô giáo đang ăn kem", "Chị gái đang thử món thịt dê"]
model = SentenceTransformer('keepitreal/vietnamese-sbert')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['Cô giáo đang ăn kem', 'Chị gái đang thử món thịt dê']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('keepitreal/vietnamese-sbert')
model = AutoModel.from_pretrained('keepitreal/vietnamese-sbert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=keepitreal/vietnamese-sbert)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
vasudevgupta/mbart-iitb-hin-eng | 34cac4c29ccb030fd43a1d678d3854a940848062 | 2021-05-12T03:35:21.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"dataset:pib",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vasudevgupta | null | vasudevgupta/mbart-iitb-hin-eng | 180 | 1 | transformers | 3,746 | ---
datasets: pib
widget:
- text: "नमस्ते! मैं वासुदेव गुप्ता हूं"
---
mBART (a pre-trained model by Facebook) is pre-trained to de-noise multiple languages simultaneously with the BART objective.
The checkpoint in this repository was obtained by fine-tuning `facebook/mbart-large-cc25` on 0.5M samples from the IIT-B Hindi-English parallel corpus, and it gives decent results for Hindi-to-English translation. |
l3cube-pune/hing-bert-lid | c14d7b0a643791e4b1f3c0fd3b0aa3496602908e | 2022-06-26T15:08:11.000Z | [
"pytorch",
"bert",
"token-classification",
"hi",
"en",
"dataset:L3Cube-HingCorpus",
"dataset:L3Cube-HingLID",
"arxiv:2204.08398",
"transformers",
"codemix",
"license:cc-by-4.0",
"autotrain_compatible"
] | token-classification | false | l3cube-pune | null | l3cube-pune/hing-bert-lid | 180 | 1 | transformers | 3,747 | ---
license: cc-by-4.0
language:
- hi
- en
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
- L3Cube-HingLID
---
## HingBERT-LID
HingBERT-LID is a Hindi-English code-mixed language identification BERT model. It is a HingBERT model fine-tuned on the L3Cube-HingLID dataset.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398).
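A minimal usage sketch (not part of the original card), using the token-classification pipeline to get word-level language tags; the example sentence is illustrative and the label names come from the model configuration.
```python
# Sketch only: word-level language identification for Hindi-English code-mixed text.
from transformers import pipeline
lid = pipeline("token-classification", model="l3cube-pune/hing-bert-lid")
print(lid("tum kaise ho I am fine"))
```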
```
@InProceedings{nayak-joshi:2022:WILDRE6,
author = {Nayak, Ravindra and Joshi, Raviraj},
title = {L3Cube-HingCorpus and HingBERT: A Code Mixed Hindi-English Dataset and BERT Language Models},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {7--12},
}
``` |
csebuetnlp/mT5_m2o_arabic_crossSum | e5c5b34c1d0853f4f8015e24223dda7c1c856787 | 2022-04-22T15:05:09.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"am",
"ar",
"az",
"bn",
"my",
"zh",
"en",
"fr",
"gu",
"ha",
"hi",
"ig",
"id",
"ja",
"rn",
"ko",
"ky",
"mr",
"ne",
"om",
"ps",
"fa",
"pcm",
"pt",
"pa",
"ru",
"gd",
"sr",
"si",
"so",
"es",
"sw",
"ta",
"te",
"th",
"ti",
"tr",
"uk",
"ur",
"uz",
"vi",
"cy",
"yo",
"arxiv:2112.08804",
"transformers",
"summarization",
"mT5",
"autotrain_compatible"
] | summarization | false | csebuetnlp | null | csebuetnlp/mT5_m2o_arabic_crossSum | 180 | null | transformers | 3,748 | ---
tags:
- summarization
- mT5
language:
- am
- ar
- az
- bn
- my
- zh
- en
- fr
- gu
- ha
- hi
- ig
- id
- ja
- rn
- ko
- ky
- mr
- ne
- om
- ps
- fa
- pcm
- pt
- pa
- ru
- gd
- sr
- si
- so
- es
- sw
- ta
- te
- th
- ti
- tr
- uk
- ur
- uz
- vi
- cy
- yo
licenses:
- cc-by-nc-sa-4.0
widget:
- text: "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization."
---
# mT5-m2o-arabic-CrossSum
This repository contains the many-to-one (m2o) mT5 checkpoint finetuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset where the target summary is in **Arabic**, i.e. this model tries to **summarize text written in any language into Arabic.** For finetuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum).
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip()))
article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""
model_name = "csebuetnlp/mT5_m2o_arabic_crossSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=84,
no_repeat_ngram_size=2,
num_beams=4
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
```
## Citation
If you use this model, please cite the following paper:
```
@article{hasan2021crosssum,
author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar},
title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
journal = {CoRR},
volume = {abs/2112.08804},
year = {2021},
url = {https://arxiv.org/abs/2112.08804},
eprinttype = {arXiv},
eprint = {2112.08804}
}
``` |
skytnt/gpt2-japanese-lyric-medium | a63ce41c4ccc273fc55f3d5a7358aa0bb13f30c5 | 2022-07-09T01:23:58.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"ja",
"transformers",
"japanese",
"lm",
"nlp",
"license:mit"
] | text-generation | false | skytnt | null | skytnt/gpt2-japanese-lyric-medium | 180 | null | transformers | 3,749 | ---
language: ja
tags:
- ja
- japanese
- gpt2
- text-generation
- lm
- nlp
license: mit
widget:
- text: "<s>桜[CLS]"
---
# Japanese GPT2 Lyric Model
## Model description
The model is used to generate Japanese lyrics.
## How to use
```python
import torch
from transformers import T5Tokenizer, GPT2LMHeadModel
device = torch.device("cpu")
if torch.cuda.is_available():
device = torch.device("cuda")
tokenizer = T5Tokenizer.from_pretrained("skytnt/gpt2-japanese-lyric-medium")
model = GPT2LMHeadModel.from_pretrained("skytnt/gpt2-japanese-lyric-medium")
model = model.to(device)
def gen_lyric(title: str, prompt_text: str):
if len(title)!= 0 or len(prompt_text)!= 0:
prompt_text = "<s>" + title + "[CLS]" + prompt_text
prompt_text = prompt_text.replace("\n", "\\n ")
prompt_tokens = tokenizer.tokenize(prompt_text)
prompt_token_ids = tokenizer.convert_tokens_to_ids(prompt_tokens)
prompt_tensor = torch.LongTensor(prompt_token_ids)
prompt_tensor = prompt_tensor.view(1, -1).to(device)
else:
prompt_tensor = None
# model forward
output_sequences = model.generate(
input_ids=prompt_tensor,
max_length=512,
top_p=0.95,
top_k=40,
temperature=1.0,
do_sample=True,
early_stopping=True,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
num_return_sequences=1
)
# convert model outputs to readable sentence
generated_sequence = output_sequences.tolist()[0]
generated_tokens = tokenizer.convert_ids_to_tokens(generated_sequence)
generated_text = tokenizer.convert_tokens_to_string(generated_tokens)
generated_text = "\n".join([s.strip() for s in generated_text.split('\\n')]).replace(' ', '\u3000').replace('<s>', '').replace('</s>', '\n\n---end---')
title_and_lyric = generated_text.split("[CLS]",1)
if len(title_and_lyric)==1:
title,lyric = "" , title_and_lyric[0].strip()
else:
title,lyric = title_and_lyric[0].strip(), title_and_lyric[1].strip()
return f"---{title}---\n\n{lyric}"
print(gen_lyric("桜",""))
```
## Training data
[Training data](https://github.com/SkyTNT/gpt2-japanese-lyric/blob/main/lyric_ids_titled.pkl) contains 143,587 Japanese lyrics which are collected from [uta-net](https://www.uta-net.com/) by [lyric_download](https://github.com/SkyTNT/lyric_downlowd)
|
tinkoff-ai/ruDialoGPT-medium | 0b547e7cb5503ac46c1b4f89600d1d7177e740e2 | 2022-07-19T20:27:25.000Z | [
"pytorch",
"gpt2",
"ru",
"arxiv:2001.09977",
"transformers",
"conversational",
"license:mit",
"text-generation"
] | text-generation | false | tinkoff-ai | null | tinkoff-ai/ruDialoGPT-medium | 180 | null | transformers | 3,750 | ---
license: mit
pipeline_tag: text-generation
widget:
- text: "@@ПЕРВЫЙ@@ привет @@ВТОРОЙ@@ привет @@ПЕРВЫЙ@@ как дела? @@ВТОРОЙ@@"
example_title: "how r u"
- text: "@@ПЕРВЫЙ@@ что ты делал на выходных? @@ВТОРОЙ@@"
example_title: "wyd"
language:
- ru
tags:
- conversational
---
This generation model is based on [sberbank-ai/rugpt3medium_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3medium_based_on_gpt2). It is trained on a large corpus of dialog data and can be used for building generative conversational agents.
The model was trained with a context size of 3.
On a private validation set we calculated metrics introduced in [this paper](https://arxiv.org/pdf/2001.09977.pdf):
- Sensibleness: Crowdsourcers were asked whether model's response makes sense given the context
- Specificity: Crowdsourcers were asked whether model's response is specific for given context, in other words we don't want our model to give general and boring responses
- SSA which is the average of two metrics above (Sensibleness Specificity Average)
| | sensibleness | specificity | SSA |
|:----------------------------------------------------|---------------:|--------------:|------:|
| [tinkoff-ai/ruDialoGPT-small](https://huggingface.co/tinkoff-ai/ruDialoGPT-small) | 0.64 | 0.5 | 0.57 |
| [tinkoff-ai/ruDialoGPT-medium](https://huggingface.co/tinkoff-ai/ruDialoGPT-medium) | 0.78 | 0.69 | 0.735 |
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/ruDialoGPT-medium')
model = AutoModelWithLMHead.from_pretrained('tinkoff-ai/ruDialoGPT-medium')
inputs = tokenizer('@@ПЕРВЫЙ@@ привет @@ВТОРОЙ@@ привет @@ПЕРВЫЙ@@ как дела? @@ВТОРОЙ@@', return_tensors='pt')
generated_token_ids = model.generate(
**inputs,
top_k=10,
top_p=0.95,
num_beams=3,
num_return_sequences=3,
do_sample=True,
no_repeat_ngram_size=2,
temperature=1.2,
repetition_penalty=1.2,
length_penalty=1.0,
eos_token_id=50257,
max_new_tokens=40
)
context_with_response = [tokenizer.decode(sample_token_ids) for sample_token_ids in generated_token_ids]
context_with_response
``` |
Rifky/Indobert-QA | 9c00be563bd5370d983746480fa1545ef9cc08ee | 2021-10-08T12:04:06.000Z | [
"pytorch",
"bert",
"question-answering",
"id",
"dataset:220M words (IndoWiki, IndoWC, News)",
"dataset:Squad 2.0 (Indonesian translated)",
"transformers",
"indobert",
"indolem",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | Rifky | null | Rifky/Indobert-QA | 179 | 2 | transformers | 3,751 | ---
language: id
tags:
- indobert
- indolem
license: apache-2.0
datasets:
- 220M words (IndoWiki, IndoWC, News)
- Squad 2.0 (Indonesian translated)
widget:
- text: kapan pangeran diponegoro lahir?
context: Pangeran Harya Dipanegara (atau biasa dikenal dengan nama Pangeran Diponegoro,
lahir di Ngayogyakarta Hadiningrat, 11 November 1785 – meninggal di Makassar,
Hindia Belanda, 8 Januari 1855 pada umur 69 tahun) adalah salah seorang pahlawan
nasional Republik Indonesia, yang memimpin Perang Diponegoro atau Perang Jawa
selama periode tahun 1825 hingga 1830 melawan pemerintah Hindia Belanda. Sejarah
mencatat, Perang Diponegoro atau Perang Jawa dikenal sebagai perang yang menelan
korban terbanyak dalam sejarah Indonesia, yakni 8.000 korban serdadu Hindia Belanda,
7.000 pribumi, dan 200 ribu orang Jawa serta kerugian materi 25 juta Gulden.
---
[Github](https://github.com/rifkybujana/IndoBERT-QA)
This project is part of my research with my friend Muhammad Fajrin Buyang Daffa entitled "Teman Belajar : Asisten Digital Pelajar SMA Negeri 28 Jakarta dalam Membaca" for KOPSI (Kompetisi Penelitian Siswa Indonesia/Indonesian Student Research Competition).
## indoBERT Base-Uncased fine-tuned on Translated Squad v2.0
[IndoBERT](https://huggingface.co/indolem/indobert-base-uncased) trained by [IndoLEM](https://indolem.github.io/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesian_datasets/tree/master/question-answering/squad) for **Q&A** downstream task.
**Model Size** (after training): 420mb
## Details of indoBERT (from their documentation)
[IndoBERT](https://huggingface.co/indolem/indobert-base-uncased) is the Indonesian version of BERT model. We train the model using over 220M words, aggregated from three main sources:
- Indonesian Wikipedia (74M words)
- news articles from Kompas, Tempo (Tala et al., 2003), and Liputan6 (55M words in total)
- an Indonesian Web Corpus (Medved and Suchomel, 2017) (90M words).
We trained the model for 2.4M steps (180 epochs) with the final perplexity over the development set being 3.97 (similar to English BERT-base).
This IndoBERT was used to examine IndoLEM - an Indonesian benchmark that comprises of seven tasks for the Indonesian language, spanning morpho-syntax, semantics, and discourse.[[1]](#1)
## Details of the downstream task (Q&A) - Dataset
SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD2.0 | train | 130k |
| SQuAD2.0 | eval | 12.3k |
## Model Training
The model was trained on a Tesla T4 GPU and 12GB of RAM.
## Results:
| Metric | # Value |
| ------ | --------- |
| **EM** | **51.61** |
| **F1** | **69.09** |
## Simple Usage
```py
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="Rifky/Indobert-QA",
tokenizer="Rifky/Indobert-QA"
)
qa_pipeline({
'context': """Pangeran Harya Dipanegara (atau biasa dikenal dengan nama Pangeran Diponegoro, lahir di Ngayogyakarta Hadiningrat, 11 November 1785 – meninggal di Makassar, Hindia Belanda, 8 Januari 1855 pada umur 69 tahun) adalah salah seorang pahlawan nasional Republik Indonesia, yang memimpin Perang Diponegoro atau Perang Jawa selama periode tahun 1825 hingga 1830 melawan pemerintah Hindia Belanda. Sejarah mencatat, Perang Diponegoro atau Perang Jawa dikenal sebagai perang yang menelan korban terbanyak dalam sejarah Indonesia, yakni 8.000 korban serdadu Hindia Belanda, 7.000 pribumi, dan 200 ribu orang Jawa serta kerugian materi 25 juta Gulden.""",
'question': "kapan pangeran diponegoro lahir?"
})
```
*output:*
```py
{
'answer': '11 November 1785',
'end': 131,
'score': 0.9272009134292603,
'start': 115
}
```
### Reference
<a id="1">[1]</a>Fajri Koto and Afshin Rahimi and Jey Han Lau and Timothy Baldwin. 2020. IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP. Proceedings of the 28th COLING. |
SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask_finetune | 9f4f6210a883876e7e7a41f884f12d374ad489ea | 2021-06-23T06:45:18.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask_finetune | 179 | null | transformers | 3,752 | ---
tags:
- summarization
widget:
- text: "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
---
# CodeTrans model for code documentation generation java
Pretrained model on programming language java using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the java function/method.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/java/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
huggingface-course/marian-finetuned-kde4-en-to-fr | b62de7715951556628f8d9c632f95458e98c2010 | 2021-11-11T17:45:32.000Z | [
"pytorch",
"tf",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | huggingface-course | null | huggingface-course/marian-finetuned-kde4-en-to-fr | 179 | null | transformers | 3,753 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: test-marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.94161337775576
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Bleu: 52.9416
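A minimal usage sketch (not part of the original card) for English-to-French translation with this checkpoint:
```python
# Sketch only: the input sentence is illustrative.
from transformers import pipeline
translator = pipeline("translation", model="huggingface-course/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```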
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
sismetanin/xlm_roberta_large-ru-sentiment-rusentiment | 30e2af4eba27e79d741688ff4e4c5a607dac93f2 | 2021-02-25T23:57:27.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ru",
"transformers",
"sentiment analysis",
"Russian"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_large-ru-sentiment-rusentiment | 179 | 1 | transformers | 3,754 | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## XLM-RoBERTa-Large-ru-sentiment-RuSentiment
XLM-RoBERTa-Large-ru-sentiment-RuSentiment is an [XLM-RoBERTa-Large](https://huggingface.co/xlm-roberta-large) model fine-tuned on the [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.
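A minimal usage sketch (not part of the original card); the example post is illustrative and the returned label names depend on the model configuration.
```python
# Sketch only: sentiment scoring through the text-classification pipeline.
from transformers import pipeline
clf = pipeline("text-classification", model="sismetanin/xlm_roberta_large-ru-sentiment-rusentiment")
print(clf("Очень понравился новый фильм!"))  # "Really liked the new movie!"
```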
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@inproceedings{rogers2018rusentiment,
title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
booktitle={Proceedings of the 27th international conference on computational linguistics},
pages={755--763},
year={2018}
}
``` |
textattack/albert-base-v2-SST-2 | 96d7dedb92b3679c4f1ae69e7e77440d058d8602 | 2020-07-06T16:32:15.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/albert-base-v2-SST-2 | 179 | null | transformers | 3,755 | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 3e-05, and a maximum sequence length of 64.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9254587155963303, as measured by the
eval set accuracy, found after 2 epochs.
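A minimal usage sketch (not part of the original card); the 0 = negative / 1 = positive label mapping follows the usual SST-2 convention and is an assumption here, not something stated in the card.
```python
# Sketch only: single-sentence sentiment classification with the fine-tuned checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
name = "textattack/albert-base-v2-SST-2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
inputs = tokenizer("A thoroughly enjoyable film.", return_tensors="pt")
with torch.no_grad():
    predicted_class = model(**inputs).logits.argmax(dim=-1).item()
print(predicted_class)  # assumed mapping: 0 = negative, 1 = positive
```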
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
jirmauritz/bert-multilingual-emoji | 4ef7879bcac5b81f4a941af9638b088e32ccb6e4 | 2021-06-28T13:43:26.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | jirmauritz | null | jirmauritz/bert-multilingual-emoji | 178 | null | transformers | 3,756 | ---
language: multilingual
license: apache-2.0
datasets:
- wikipedia
---
# BERT multilingual base model (cased)
Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.
It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is case sensitive: it makes a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-cased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] Hello I'm a model model. [SEP]",
'score': 0.10182085633277893,
'token': 13192,
'token_str': 'model'},
{'sequence': "[CLS] Hello I'm a world model. [SEP]",
'score': 0.052126359194517136,
'token': 11356,
'token_str': 'world'},
{'sequence': "[CLS] Hello I'm a data model. [SEP]",
'score': 0.048930276185274124,
'token': 11165,
'token_str': 'data'},
{'sequence': "[CLS] Hello I'm a flight model. [SEP]",
'score': 0.02036019042134285,
'token': 23578,
'token_str': 'flight'},
{'sequence': "[CLS] Hello I'm a business model. [SEP]",
'score': 0.020079681649804115,
'token': 14155,
'token_str': 'business'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = TFBertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list
[here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece with a shared vocabulary size of 110,000. The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that don't have space, a CJK Unicode block is added around every character.
The inputs of the model are then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
microsoft/beit-large-patch16-224 | 0bd443cfdfa82333978cac2253da417b33ff5018 | 2022-01-28T10:19:16.000Z | [
"pytorch",
"jax",
"beit",
"image-classification",
"dataset:imagenet",
"dataset:imagenet-21k",
"arxiv:2106.08254",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/beit-large-patch16-224 | 178 | null | transformers | 3,757 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-21k
---
# BEiT (large-sized model, fine-tuned on ImageNet-1k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import BeitFeatureExtractor, BeitForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-224')
model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
pedrobaiainin/DialoGPT-small-harrypotter | d815a8618b4759f47704020670997f3261ce3efe | 2022-05-12T22:18:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | pedrobaiainin | null | pedrobaiainin/DialoGPT-small-harrypotter | 178 | null | transformers | 3,758 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
cambridgeltl/trans-encoder-cross-simcse-roberta-base | 7e07c5daa82e8407d2fcb435a7360ea0033b1990 | 2021-11-26T18:22:19.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cambridgeltl | null | cambridgeltl/trans-encoder-cross-simcse-roberta-base | 177 | null | transformers | 3,759 | Entry not found |
cardiffnlp/twitter-roberta-base-2019-90m | b28ca2617bbb48f241d88bcadeafd641d4ef62a3 | 2022-02-09T11:11:16.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.03829",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-2019-90m | 177 | null | transformers | 3,760 | # Twitter 2019 90M (RoBERTa-base)
This is a RoBERTa-base model trained on 90M tweets until the end of 2019.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-2019-90m"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def print_candidates():
for i in range(5):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
print_candidates()
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.28870 getting
2) 0.28611 not
3) 0.15485 fully
4) 0.07357 self
5) 0.01812 being
------------------------------
I keep forgetting to bring a <mask>.
1) 0.12194 book
2) 0.04396 pillow
3) 0.04202 bag
4) 0.03038 wallet
5) 0.02729 charger
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.65505 End
2) 0.19230 The
3) 0.03856 the
4) 0.01223 end
5) 0.00978 this
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text):
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
return features_mean
MODEL = "cardiffnlp/twitter-roberta-base-2019-90m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99078 The movie was great
2) 0.96701 Just finished reading 'Embeddings in NLP'
3) 0.96037 I just ordered fried chicken 🐣
4) 0.95919 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-2019-90m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
``` |
lysandre/arxiv-nlp | 894a9adde21d9a3e3843e6d5aeaaf01875c7fade | 2021-05-23T08:42:23.000Z | [
"pytorch",
"jax",
"gpt2",
"transformers"
] | null | false | lysandre | null | lysandre/arxiv-nlp | 177 | null | transformers | 3,761 | # ArXiv-NLP GPT-2 checkpoint
This is a GPT-2 small checkpoint for PyTorch. It is the official `gpt2-small` checkpoint fine-tuned on ArXiv papers in the computational linguistics field.
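A minimal usage sketch (not part of the original card), assuming the repository ships its tokenizer files; otherwise the stock `gpt2` tokenizer can be passed to the pipeline explicitly.
```python
# Sketch only: prompt and generation settings are illustrative.
from transformers import pipeline
generator = pipeline("text-generation", model="lysandre/arxiv-nlp")
print(generator("Attention mechanisms in neural machine translation", max_length=40))
```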
## Training data
This model was trained on a subset of ArXiv papers that were parsed from PDF to txt. The resulting data is made of 80MB of text from the computational linguistics (cs.CL) field. |
voidful/context-only-question-generator | cc10aa73ab4cc0ef15e91403b7efabdb05872c9c | 2021-12-09T12:43:26.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:unifiedQA",
"transformers",
"question",
"generation",
"seq2seq",
"autotrain_compatible"
] | text2text-generation | false | voidful | null | voidful/context-only-question-generator | 177 | 1 | transformers | 3,762 | ---
language: en
tags:
- bart
- question
- generation
- seq2seq
datasets:
- unifiedQA
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
widget:
- text: "Harry Potter is a series of seven fantasy novels written by British author J. K. Rowling. The novels chronicle the lives of a young wizard, Harry Potter, and his friends Hermione Granger and Ron Weasley, all of whom are students at Hogwarts School of Witchcraft and Wizardry. The main story arc concerns Harry's struggle against Lord Voldemort, a dark wizard who intends to become immortal, overthrow the wizard governing body known as the Ministry of Magic and subjugate all wizards and Muggles(non-magical people)."
---
# context-only-question-generator
## Model description
This model is a sequence-to-sequence question generator which takes context as an input, and generates a question as an output.
It is based on a pretrained `bart-base` model.
#### How to use
Inputs should be organised into the following format:
```
context
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
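For example (a minimal sketch, not part of the original card; the decoding settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
name = "voidful/context-only-question-generator"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)
# the context passage is the only input; the model generates a question about it
context = "Harry Potter is a series of seven fantasy novels written by J. K. Rowling."
input_ids = tokenizer(context, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```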
|
soleimanian/financial-roberta-large-sentiment | acdaa9e81863b9c0b2b44fa1083274effe237817 | 2022-05-31T16:52:46.000Z | [
"pytorch",
"roberta",
"text-classification",
"English",
"transformers",
"Sentiment",
"RoBERTa",
"Financial Statements",
"Accounting",
"Finance",
"Business",
"ESG",
"CSR Reports",
"Financial News",
"Earnings Call Transcripts",
"Sustainability",
"Corporate governance",
"license:apache-2.0"
] | text-classification | false | soleimanian | null | soleimanian/financial-roberta-large-sentiment | 177 | 1 | transformers | 3,763 | ---
license: apache-2.0
language:
- English
tags:
- text-classification
- Sentiment
- RoBERTa
- Financial Statements
- Accounting
- Finance
- Business
- ESG
- CSR Reports
- Financial News
- Earnings Call Transcripts
- Sustainability
- Corporate governance
---
<!DOCTYPE html>
<html>
<body>
<h1><b>Financial-RoBERTa</b></h1>
<p><b>Financial-RoBERTa</b> is a pre-trained NLP model to analyze sentiment of financial text including:</p>
<ul style="PADDING-LEFT: 40px">
<li>Financial Statements,</li>
<li>Earnings Announcements,</li>
<li>Earnings Call Transcripts,</li>
<li>Corporate Social Responsibility (CSR) Reports,</li>
<li>Environmental, Social, and Governance (ESG) News,</li>
<li>Financial News,</li>
<li>Etc.</li>
</ul>
<p>Financial-RoBERTa is built by further training and fine-tuning the RoBERTa Large language model using a large corpus created from 10k, 10Q, 8K, Earnings Call Transcripts, CSR Reports, ESG News, and Financial News text.</p>
<p>The model will give softmax outputs for three labels: <b>Positive</b>, <b>Negative</b> or <b>Neutral</b>.</p>
<p><b>How to perform sentiment analysis:</b></p>
<p>The easiest way to use the model for single predictions is Hugging Face's sentiment analysis pipeline, which only needs a couple lines of code as shown in the following example:</p>
<pre>
<code>
from transformers import pipeline
sentiment_analysis = pipeline("sentiment-analysis",model="soleimanian/financial-roberta-large-sentiment")
print(sentiment_analysis("In fiscal 2021, we generated a net yield of approximately 4.19% on our investments, compared to approximately 5.10% in fiscal 2020."))
</code>
</pre>
<p>I provide an example script via <a href="https://colab.research.google.com/drive/11RGWU3UDtxnjan8Ug6dyX82m9fBV6CGo?usp=sharing" target="_blank">Google Colab</a>. You can load your data to a Google Drive and run the script for free on a Colab.
<p><b>Citation and contact:</b></p>
<p>Please cite <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4115943" target="_blank">this paper</a> when you use the model. Feel free to reach out to [email protected] with any questions or feedback you may have.<p/>
</body>
</html>
|
soProf1998/DialoGPT-medium-chattyrick | 919992ba164ab1ca64f71c5a9908f75cf239b852 | 2022-06-26T10:49:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | soProf1998 | null | soProf1998/DialoGPT-medium-chattyrick | 177 | 1 | transformers | 3,764 | ---
thumbnail: https://raw.githubusercontent.com/RuolinZheng08/twewy-discord-chatbot/main/gif-demo/icon.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of Rick from [The Show Rick & Morty]
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a character speech.
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("soProf1998/DialoGPT-medium-chattyrick")
model = AutoModelWithLMHead.from_pretrained("soProf1998/DialoGPT-medium-chattyrick")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generate a response while limiting the total sequence length to 200 tokens (max_length below)
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last ouput tokens from bot
print("RickBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
Evelyn18/distilbert-base-uncased-becas-1 | 4b3346df8a5d11bef9e175d7c53e634a19959506 | 2022-07-02T02:44:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/distilbert-base-uncased-becas-1 | 177 | null | transformers | 3,765 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-becas-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becas-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8655
## Model description
More information needed
## Intended uses & limitations
More information needed
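The checkpoint is exposed as an extractive question-answering model; a minimal, hedged usage sketch (not part of the original card, with a purely illustrative Spanish question/context pair):
```python
from transformers import pipeline
qa = pipeline("question-answering", model="Evelyn18/distilbert-base-uncased-becas-1")
print(qa(question="¿Qué cubre la beca?",
         context="La beca cubre la matrícula y los materiales de estudio."))
```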
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.4379 |
| No log | 2.0 | 10 | 4.9216 |
| No log | 3.0 | 15 | 4.5533 |
| No log | 4.0 | 20 | 4.2022 |
| No log | 5.0 | 25 | 3.9714 |
| No log | 6.0 | 30 | 3.8209 |
| No log | 7.0 | 35 | 3.7916 |
| No log | 8.0 | 40 | 3.7497 |
| No log | 9.0 | 45 | 3.8372 |
| No log | 10.0 | 50 | 3.8655 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
hf-internal-testing/tiny-random-wavlm | 3cabe08e16f230d3c4fb7d5ac3e1207349c6751f | 2022-01-26T12:49:55.000Z | [
"pytorch",
"wavlm",
"audio-classification",
"transformers"
] | audio-classification | false | hf-internal-testing | null | hf-internal-testing/tiny-random-wavlm | 176 | null | transformers | 3,766 | Entry not found |
huggingtweets/drilbot_neo | cf026e12b01e82c9fafb1d342c352a0572a56279 | 2022-06-10T08:39:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/drilbot_neo | 176 | null | transformers | 3,767 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374924360780242944/-Q8NfgEr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wintbot_neo</div>
<div style="text-align: center; font-size: 14px;">@drilbot_neo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wintbot_neo.
| Data | wintbot_neo |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 373 |
| Short tweets | 468 |
| Tweets kept | 2402 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25adu2w7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @drilbot_neo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3keot8ku) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3keot8ku/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/drilbot_neo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
phueb/BabyBERTa-1 | 86e225d8c3c3bda86cf6bd587ba1f3a660d993be | 2022-01-18T14:44:02.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:CHILDES",
"transformers",
"BabyBERTa",
"autotrain_compatible"
] | fill-mask | false | phueb | null | phueb/BabyBERTa-1 | 176 | null | transformers | 3,768 | ---
language: en
tags:
- BabyBERTa
datasets:
- CHILDES
widget:
- text: "Look here. What is that <mask> ?"
- text: "Do you like your <mask> ?"
---
## BabyBERTA
### Overview
BabyBERTa is a light-weight version of RoBERTa trained on 5M words of American-English child-directed input.
It is intended for language acquisition research, on a single desktop with a single GPU - no high-performance computing infrastructure needed.
The three provided models are randomly selected from 10 that were trained and reported in the paper.
## Loading the tokenizer
BabyBERTa was trained with `add_prefix_space=True`, so it will not work properly with the tokenizer defaults.
For instance, to load the tokenizer for BabyBERTa-1, load it as follows:
```python
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1",
add_prefix_space=True)
```
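
For a quick end-to-end check, the sketch below wires the tokenizer loaded as above into a `fill-mask` pipeline. The masked sentence mirrors the widget examples and is meant as an illustrative probe, not an evaluation.

```python
from transformers import RobertaTokenizerFast, RobertaForMaskedLM, pipeline

tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1",
                                                 add_prefix_space=True)
model = RobertaForMaskedLM.from_pretrained("phueb/BabyBERTa-1")

# Pass the custom tokenizer so add_prefix_space is respected
unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in unmasker("Do you like your <mask> ?"):
    print(prediction["token_str"], round(prediction["score"], 3))
```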
### Hyper-Parameters
See the paper for details.
All provided models were trained for 400K steps with a batch size of 16.
Importantly, BabyBERTa never predicts unmasked tokens during training - `unmask_prob` is set to zero.
### Performance
BabyBerta was developed for learning grammatical knowledge from child-directed input.
Its grammatical knowledge was evaluated using the [Zorro](https://github.com/phueb/Zorro) test suite.
The best model achieves an overall accuracy of 80.3,
comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October, 2021).
Both values differ slightly from those reported in the [CoNLL 2021 paper](https://aclanthology.org/2021.conll-1.49/).
There are two reasons for this:
1. Performance of RoBERTa-base is slightly larger because the authors previously lower-cased all words in Zorro before evaluation.
Lower-casing of proper nouns is detrimental to RoBERTa-base because RoBERTa-base has likely been trained on proper nouns that are primarily title-cased.
In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change.
2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish" which can be both a noun and an adjective.
This resulted in a small reduction in the performance of BabyBERTa.
Overall Accuracy on Zorro:
| Model Name | Accuracy (holistic scoring) | Accuracy (MLM-scoring) |
|----------------------------------------|------------------------------|------------|
| [BabyBERTa-1][link-BabyBERTa-1] | 80.3 | 79.9 |
| [BabyBERTa-2][link-BabyBERTa-2] | 78.6 | 78.2 |
| [BabyBERTa-3][link-BabyBERTa-3] | 74.5 | 78.1 |
### Additional Information
This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org).
More info can be found [here](https://github.com/phueb/BabyBERTa).
[link-BabyBERTa-1]: https://huggingface.co/phueb/BabyBERTa-1
[link-BabyBERTa-2]: https://huggingface.co/phueb/BabyBERTa-2
[link-BabyBERTa-3]: https://huggingface.co/phueb/BabyBERTa-3
|
WENGSYX/CirBERTa-Chinese-Base | 8e99626274d926a909165b08444ac3011bf85b67 | 2022-04-14T14:27:04.000Z | [
"pytorch",
"deberta-v2",
"transformers"
] | null | false | WENGSYX | null | WENGSYX/CirBERTa-Chinese-Base | 176 | 3 | transformers | 3,769 | # CirBERTa
### Apply the Circular to the Pretraining Model
| Pretrained model | Learning rate | Batch size | Hardware | Corpus | Training time | Optimizer |
| --------------------- | ------ | --------- | ------ | ------ | ---- | ------ |
| CirBERTa-Chinese-Base | 1e-5 | 256 | 10× RTX 3090 + 3× A100 | 200 GB | 2 months | AdamW |
Unsupervised pre-training was performed on a general-domain corpus (WuDao, 200 GB).
On multiple Chinese language understanding tasks, the CirBERTa-Base model outperforms MacBERT-Chinese-Large and RoBERTa-Chinese-Large.
### Loading and usage
Built on top of huggingface-transformers.
```
from transformers import AutoTokenizer,AutoModel
tokenizer = AutoTokenizer.from_pretrained("WENGSYX/CirBERTa-Chinese-Base")
model = AutoModel.from_pretrained("WENGSYX/CirBERTa-Chinese-Base")
```
### Citation:
(Please cite the entry below for now; the paper is in preparation.)
```
@misc{CirBERTa,
title={CirBERTa: Apply the Circular to the Pretraining Model},
author={Yixuan Weng},
howpublished={\url{https://github.com/WENGSYX/CirBERTa}},
year={2022}
}
```
|
CenIA/distillbert-base-spanish-uncased-finetuned-ner | 17496d7bf9d359c720cfd2913e9dd2816941adf2 | 2022-01-06T19:42:07.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/distillbert-base-spanish-uncased-finetuned-ner | 175 | null | transformers | 3,770 | Entry not found |
m3hrdadfi/wav2vec2-large-xlsr-icelandic | b69b134c43165dadb01ba83cfcadd84ad678938a | 2021-11-04T15:22:07.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"is",
"dataset:malromur",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | m3hrdadfi | null | m3hrdadfi/wav2vec2-large-xlsr-icelandic | 175 | null | transformers | 3,771 | ---
language: is
datasets:
- malromur
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- example_title: Malromur sample 1608
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/resolve/main/sample1608.flac
- example_title: Malromur sample 3860
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/resolve/main/sample3860.flac
model-index:
- name: XLSR Wav2Vec2 Icelandic by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Malromur is
type: malromur
args: lt
metrics:
- name: Test WER
type: wer
value: 09.21
---
# Wav2Vec2-Large-XLSR-53-Icelandic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Icelandic using [Malromur](https://clarin.is/en/resources/malromur/). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
!pip install num2words
```
**Normalizer**
```bash
# num2word packages
# Original source: https://github.com/savoirfairelinux/num2words
!mkdir -p ./num2words
!wget -O num2words/__init__.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/__init__.py
!wget -O num2words/base.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/base.py
!wget -O num2words/compat.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/compat.py
!wget -O num2words/currency.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/currency.py
!wget -O num2words/lang_EU.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/lang_EU.py
!wget -O num2words/lang_IS.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/lang_IS.py
!wget -O num2words/utils.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/num2words/utils.py
# Malromur_test selected based on gender and age
!wget -O malromur_test.csv https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/malromur_test.csv
# Normalizer
!wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-icelandic/raw/main/normalizer.py
```
**Prediction**
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import numpy as np
import re
import string
import IPython.display as ipd
from normalizer import Normalizer
normalizer = Normalizer(lang="is")
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-icelandic")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-icelandic").to(device)
dataset = load_dataset("csv", data_files={"test": "./malromur_test.csv"})["test"]
dataset = dataset.map(
normalizer,
fn_kwargs={"do_lastspace_removing": True, "text_key_name": "cleaned_sentence"},
remove_columns=list(set(dataset.column_names) - set(['cleaned_sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict, batched=True, batch_size=8)
max_items = np.random.randint(0, len(result), 20).tolist()
for i in max_items:
reference, predicted = result["cleaned_sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
**Output:**
```text
reference: eða eitthvað annað dýr
predicted: eða eitthvað annað dýr
---
reference: oddgerður
predicted: oddgerður
---
reference: eiðný
predicted: eiðný
---
reference: löndum
predicted: löndum
---
reference: tileinkaði bróður sínum markið
predicted: tileinkaði bróður sínum markið
---
reference: þetta er svo mikill hégómi
predicted: þetta er svo mikill hégómi
---
reference: timarit is
predicted: timarit is
---
reference: stefna strax upp aftur
predicted: stefna strax upp aftur
---
reference: brekkuflöt
predicted: brekkuflöt
---
reference: áætlunarferð frestað vegna veðurs
predicted: áætluna ferð frestað vegna veðurs
---
reference: sagði af sér vegna kláms
predicted: sagði af sér vegni kláms
---
reference: grímúlfur
predicted: grímúlgur
---
reference: lýsti sig saklausan
predicted: lýsti sig saklausan
---
reference: belgingur is
predicted: belgingur is
---
reference: sambía
predicted: sambía
---
reference: geirastöðum
predicted: geirastöðum
---
reference: varð tvisvar fyrir eigin bíl
predicted: var tvisvar fyrir eigin bíl
---
reference: reykjavöllum
predicted: reykjavöllum
---
reference: miklir menn eru þeir þremenningar
predicted: miklir menn eru þeir þremenningar
---
reference: handverkoghonnun is
predicted: handverkoghonnun is
---
```
## Evaluation
The model can be evaluated as follows on the test data of Malromur.
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import numpy as np
import re
import string
from normalizer import Normalizer
normalizer = Normalizer(lang="is")
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-icelandic")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-icelandic").to(device)
dataset = load_dataset("csv", data_files={"test": "./malromur_test.csv"})["test"]
dataset = dataset.map(
normalizer,
fn_kwargs={"do_lastspace_removing": True, "text_key_name": "cleaned_sentence"},
remove_columns=list(set(dataset.column_names) - set(['cleaned_sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict, batched=True, batch_size=8)
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["cleaned_sentence"])))
```
**Test Result**:
- WER: 09.21%
## Training & Report
The Common Voice `train`, `validation` datasets were used for training.
You can see the training states [here](https://wandb.ai/m3hrdadfi/wav2vec2_large_xlsr_is/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-Icelandic--Vmlldzo2Mjk3ODc?accessToken=j7neoz71mce1fkzt0bch4j0l50witnmme07xe90nvs769kjjtbwneu2wfz3oip16)
The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Icelandic_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
## Questions?
Post a Github issue on the [Wav2Vec](https://github.com/m3hrdadfi/wav2vec) repo. |
ml6team/mbart-large-cc25-cnn-dailymail-xsum-nl | c64279de2b23d3b081cd4c44edf45dc090d02a03 | 2022-05-16T11:41:07.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"nl",
"dataset:ml6team/cnn_dailymail_nl",
"dataset:ml6team/xsum_nl",
"transformers",
"bart",
"summarization",
"autotrain_compatible"
] | summarization | false | ml6team | null | ml6team/mbart-large-cc25-cnn-dailymail-xsum-nl | 175 | 2 | transformers | 3,772 | ---
language:
- nl
tags:
- mbart
- bart
- summarization
datasets:
- ml6team/cnn_dailymail_nl
- ml6team/xsum_nl
pipeline_tag: summarization
widget:
- text: 'Het jongetje werd eind april met zwaar letsel naar het ziekenhuis gebracht in Maastricht. Drie weken later overleed het kindje als gevolg van het letsel. Onderzoek moet nog uitwijzen wat voor verwondingen de baby precies had en hoe hij gewond is geraakt. Daarnaast doet de politie onderzoek in de woning van de ouders. Het is nog niet duidelijk wanneer de onderzoeken zijn afgerond, meldt 1Limburg. De verdachten zitten in beperkingen en mogen alleen contact hebben met hun advocaat.'
- text: 'Volgens De Vries gaat het om "de hoogste beloning die ooit is uitgeloofd in Nederland". De stichting heeft een website waar donateurs geld kunnen storten, schrijft NH Nieuws. Volgens De Vries is dit initiatief ook bedoeld voor andere zaken waar beloningen voor een gouden tip worden uitgereikt. "Het is dus niet eenmalig", aldus De Vries. Het is de eerste keer dat zoiets wordt opgezet, stelt hij: De 18-jarige Tanja Groen verdween spoorloos tijdens de ontgroeningsweek van de Universiteit Maastricht in augustus 1993. Ze werd voor het laatst gezien nadat ze was vertrokken van een feestje. De studente zou vandaag 46 jaar zijn geworden. Ook de ouders van Groen waren op de persconferentie aanwezig. "Het is vandaag de verjaardag van Tanja Groen, die haar ouders al 27 jaar niet meer hebben kunnen vieren, omdat zij eind augustus 1993 spoorloos is verdwenen", zei De Vries. "Haar ouders zitten in tergende onzekerheid. Ze geloven dat ze niet meer leeft. Maar die ene promille vreet aan ze. Ze hebben recht op duidelijkheid. Ze komen op leeftijd. Grootste angst is nooit te weten wat er met hun kind is gebeurd." De Vries wil dat het miljoen binnen een jaar is ingezameld. Als het bedrag na een jaar lager uitkomt, dan is dat de uit te loven beloning. Is het meer, dan zal de rest van het geld gebruikt worden in beloningen in andere zaken. Het initiatief wordt gesteund door de politie en justitie. De afgelopen jaren is er vaker uitgebreid naar sporen van Tanja Groen gezocht, maar die zoekacties hebben niets concreets opgeleverd. Vorige week werd opnieuw naar de vrouw gezocht, op de Strabrechtse Heide in Noord-Brabant. Ook die zoektocht leverde niets op.'
---
# mbart-large-cc25-cnn-dailymail-xsum-nl
## Model description
Finetuned version of [mbart](https://huggingface.co/facebook/mbart-large-cc25). We also wrote a **blog post** about this model [here](https://blog.ml6.eu/why-we-open-sourced-two-dutch-summarization-datasets-1047445abc97)
## Intended uses & limitations
It's meant for summarizing Dutch news articles.
#### How to use
```python
import transformers
undisputed_best_model = transformers.MBartForConditionalGeneration.from_pretrained(
"ml6team/mbart-large-cc25-cnn-dailymail-xsum-nl"
)
tokenizer = transformers.MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
summarization_pipeline = transformers.pipeline(
task="summarization",
model=undisputed_best_model,
tokenizer=tokenizer,
)
summarization_pipeline.model.config.decoder_start_token_id = tokenizer.lang_code_to_id[
"nl_XX"
]
article = "Kan je dit even samenvatten alsjeblief." # Dutch
summarization_pipeline(
article,
do_sample=True,
top_p=0.75,
top_k=50,
min_length=50,
early_stopping=True,
truncation=True,
)[0]["summary_text"]
```
## Training data
Finetuned [mbart](https://huggingface.co/facebook/mbart-large-cc25) with [this dataset](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl) and [this dataset](https://huggingface.co/datasets/ml6team/xsum_nl)
|
moussaKam/barthez-sentiment-classification | adba67e0571033563349a3758a0459d44653331c | 2021-11-15T13:02:33.000Z | [
"pytorch",
"mbart",
"text-classification",
"fr",
"arxiv:2010.12321",
"transformers",
"bart",
"license:apache-2.0"
] | text-classification | false | moussaKam | null | moussaKam/barthez-sentiment-classification | 175 | 1 | transformers | 3,773 | ---
tags:
- text-classification
- bart
language:
- fr
license: apache-2.0
widget:
- text: Barthez est le meilleur gardien du monde.
---
### BARThez model fine-tuned on an opinion classification task.
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
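
A minimal usage sketch with the `text-classification` pipeline is shown below; the example sentence matches the widget, and the label names returned depend on the checkpoint's config.

```python
from transformers import pipeline

# Load the fine-tuned BARThez sentiment classifier
classifier = pipeline(
    "text-classification",
    model="moussaKam/barthez-sentiment-classification",
)
print(classifier("Barthez est le meilleur gardien du monde."))
```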
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
|
StanfordAIMI/RadBERT | ce3f7a29afd4f0c5c88c89672c52a8dd7cbdbb5c | 2022-05-07T23:22:33.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookscorpus",
"dataset:pubmed",
"dataset:radreports",
"transformers",
"biobert",
"radbert",
"language-model",
"uncased",
"radiology",
"biomedical",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | StanfordAIMI | null | StanfordAIMI/RadBERT | 175 | 5 | transformers | 3,774 | ---
widget:
- text: "low lung volumes, [MASK] pulmonary vascularity."
tags:
- fill-mask
- pytorch
- transformers
- bert
- biobert
- radbert
- language-model
- uncased
- radiology
- biomedical
datasets:
- wikipedia
- bookscorpus
- pubmed
- radreports
language:
- en
license: mit
---
RadBERT was continuously pre-trained on radiology reports from a BioBERT initialization. Manuscript in proceedings. |
EMBO/BioMegatron345mCased | 3b82e192316436e689c58d24bb55ef6223953b64 | 2022-05-31T13:24:48.000Z | [
"pytorch",
"megatron-bert",
"english",
"arxiv:2010.06060",
"transformers",
"language model",
"license:cc-by-4.0"
] | null | false | EMBO | null | EMBO/BioMegatron345mCased | 175 | null | transformers | 3,775 | ---
license: cc-by-4.0
language:
- english
thumbnail:
tags:
- language model
---
<!---
# ##############################################################################################
#
# This model has been uploaded to HuggingFace by https://huggingface.co/drAbreu
# The model is based on the NVIDIA checkpoint located at
# https://catalog.ngc.nvidia.com/orgs/nvidia/models/biomegatron345mcased
#
# ##############################################################################################
-->
[BioMegatron](https://arxiv.org/pdf/2010.06060.pdf) is a transformer developed by the Applied Deep Learning Research team at NVIDIA. This particular model was trained on top of the Megatron-LM model, adding a PubMed corpus to the Megatron-LM corpora (Wikipedia, RealNews, OpenWebText, and CC-Stories). BioMegatron follows a similar (albeit not identical) architecture to BERT and has 345 million parameters:
* 24 layers
* 16 attention heads with a hidden size of 1024.
More information available at [nVIDIA NGC CATALOG](https://catalog.ngc.nvidia.com/orgs/nvidia/models/biomegatron345mcased)
# Running BioMegatron in 🤗 transformers
In this implementation we have followed the commands of the [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-cased-345m) repository to make BioMegatron available in 🤗.
However, the file [`convert_megatron_bert_checkpoint.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py) needed a modification. The reason is that the Megatron model shown in [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-cased-345m) has included head layers, while the weights of the BioMegatron model that we upload to this repository do not contain a head.
The code below is a modification of the original [`convert_megatron_bert_checkpoint.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py).
```python
import json
import os

import torch

# recursive_print and convert_megatron_checkpoint both live in the helper script provided in this repo
from convert_biomegatron_checkpoint import convert_megatron_checkpoint, recursive_print
print_checkpoint_structure = True
path_to_checkpoint = "/path/to/BioMegatron345mUncased/"
# Extract the basename.
basename = os.path.dirname(path_to_checkpoint).split('/')[-1]
# Load the model.
input_state_dict = torch.load(os.path.join(path_to_checkpoint, 'model_optim_rng.pt'), map_location="cpu")
# Convert.
print("Converting")
output_state_dict, output_config = convert_megatron_checkpoint(input_state_dict, head_model=False)
# Print the structure of converted state dict.
if print_checkpoint_structure:
recursive_print(None, output_state_dict)
# Store the config to file.
output_config_file = os.path.join(path_to_checkpoint, "config.json")
print(f'Saving config to "{output_config_file}"')
with open(output_config_file, "w") as f:
json.dump(output_config, f)
# Store the state_dict to file.
output_checkpoint_file = os.path.join(path_to_checkpoint, "pytorch_model.bin")
print(f'Saving checkpoint to "{output_checkpoint_file}"')
torch.save(output_state_dict, output_checkpoint_file)
```
We provide an alternative version of the [python script](https://huggingface.co/EMBO/BioMegatron345mCased/blob/main/convert_biomegatron_checkpoint.py) in this repository so that any user can cross-check the validity of the model replicated here.
BioMegatron can be run with the standard 🤗 script for loading models. Here we show an example identical to that of [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-cased-345m).
```python
import os
import torch
from transformers import BertTokenizer, MegatronBertForMaskedLM, AutoModelForMaskedLM
checkpoint = "EMBO/BioMegatron345mCased"
# The tokenizer. Megatron was trained with standard tokenizer(s).
tokenizer = BertTokenizer.from_pretrained(checkpoint)
# Load the model from $MYDIR/nvidia/megatron-bert-uncased-345m.
model = AutoModelForMaskedLM.from_pretrained(checkpoint)
device = torch.device("cpu")
# Create inputs (from the BERT example page).
input = tokenizer("The capital of France is [MASK]", return_tensors="pt").to(device)
label = tokenizer("The capital of France is Paris", return_tensors="pt")["input_ids"].to(device)
# Run the model.
with torch.no_grad():
output = model(**input, labels=label)
print(output)
```
# Limitations
This implementation has not been fine-tuned in any task. It has only the weights of the official nVIDIA checkpoint. It needs to be trained to perform any downstream task.
# Original code
The original code for Megatron can be found here: [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM).
|
Bhuvana/t5-base-spellchecker | 02a2e8005ea4ad2c6ad3f1d59c92f4db804989eb | 2022-01-04T12:46:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bhuvana | null | Bhuvana/t5-base-spellchecker | 174 | null | transformers | 3,776 | ---
widget:
- text: "christmas is celbrated on decembr 25 evry ear"
---
# Spell checker using T5 base transformer
A simple spell checker built using T5-Base transformer. To use this model
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Bhuvana/t5-base-spellchecker")
model = AutoModelForSeq2SeqLM.from_pretrained("Bhuvana/t5-base-spellchecker")
def correct(inputs):
input_ids = tokenizer.encode(inputs,return_tensors='pt')
sample_output = model.generate(
input_ids,
do_sample=True,
max_length=50,
top_p=0.99,
num_return_sequences=1
)
res = tokenizer.decode(sample_output[0], skip_special_tokens=True)
return res
text = "christmas is celbrated on decembr 25 evry ear"
print(correct(text))
```
This should print the corrected statement
```
christmas is celebrated on december 25 every year
```
You can also type the text under the Hosted inference API and get predictions online.
|
Geotrend/distilbert-base-zh-cased | c656a076c3ed58dfb48db7befc555feed5c4dc82 | 2021-08-16T13:15:12.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"zh",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-zh-cased | 174 | null | transformers | 3,777 | ---
language: zh
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-zh-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-zh-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Guscode/DKbert-hatespeech-detection | edbbde93f7c0eced542a84ca47ab9dbb74b58605 | 2021-09-22T07:55:16.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"da",
"dataset:DKHate - OffensEval2020",
"transformers",
"Hatespeech",
"Danish",
"BERT",
"license:mit"
] | text-classification | false | Guscode | null | Guscode/DKbert-hatespeech-detection | 174 | 1 | transformers | 3,778 | ---
language:
- da
tags:
- Hatespeech
- Danish
- BERT
license: mit
datasets:
- DKHate - OffensEval2020
Classes:
- Hateful
- Not Hateful
---
# DKbert-hatespeech-classification
Use this model to detect hatespeech in Danish. For details, guide and command line tool see [DK hate github](https://github.com/Guscode/DKbert-hatespeech-detection)
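
A minimal sketch of running the classifier through the `text-classification` pipeline is shown below (see the GitHub repo for the full command line tool). The Danish example sentence is made up, and the returned label names come from the checkpoint's config.

```python
from transformers import pipeline

# Load the Danish hate speech classifier
detector = pipeline(
    "text-classification",
    model="Guscode/DKbert-hatespeech-detection",
)

# Hypothetical Danish input, for illustration only
print(detector("Det var en rigtig god kamp i går."))
```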
## Training data
Training data is from OffensEval2020 which can be found [here]( https://figshare.com/articles/dataset/Danish_Hate_Speech_Abusive_Language_data/12220805)
## Performance
The model achieves a macro F1-score of 0.78.

- Precision (hateful): 0.77
- Recall (hateful): 0.49
See more on [DK hate github](https://github.com/Guscode/DKbert-hatespeech-detection)
## Training procedure
- [BOTXO Nordic Bert](https://huggingface.co/DJSammy/bert-base-danish-uncased_BotXO,ai)
- Learning rate: 1e-5,
- Batch size: 16
- Max sequence length: 128
## Project information
This model was made in collaboration between [Johan Horsmans](https://github.com/JohanHorsmans) and [Gustav Aarup Lauridsen](https://github.com/Guscode) for their Cultural Data Science Exam.
|
jonatasgrosman/wav2vec2-xls-r-1b-french | 1ad5b48188ef40418935fff76f992178217941c8 | 2022-07-27T23:39:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/wav2vec2-xls-r-1b-french | 174 | 4 | transformers | 3,779 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R Wav2Vec2 French by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: fr
metrics:
- name: Test WER
type: wer
value: 16.85
- name: Test CER
type: cer
value: 4.66
- name: Test WER (+LM)
type: wer
value: 16.32
- name: Test CER (+LM)
type: cer
value: 4.21
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Dev WER
type: wer
value: 22.34
- name: Dev CER
type: cer
value: 9.88
- name: Dev WER (+LM)
type: wer
value: 17.16
- name: Dev CER (+LM)
type: cer
value: 9.38
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: fr
metrics:
- name: Test WER
type: wer
value: 19.15
---
# Fine-tuned XLS-R 1B model for speech recognition in French
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on French using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [MediaSpeech](https://www.openslr.org/108/), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-french")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fr"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-french"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-french --dataset mozilla-foundation/common_voice_8_0 --config fr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-french --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr-1b-french,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {F}rench},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-french}},
year={2022}
}
```
|
Inari/deberta-v3-large-snli_mnli_fever_anli_R1_R2_R3-nli | ca16637efa5f33a76416b864af7594d5221ed025 | 2022-04-25T13:37:15.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"dataset:snli-1.0",
"dataset:multi-nli-1.0",
"dataset:nli-fever",
"dataset:anli-v1.0",
"transformers"
] | text-classification | false | Inari | null | Inari/deberta-v3-large-snli_mnli_fever_anli_R1_R2_R3-nli | 174 | null | transformers | 3,780 | ---
language:
- en
tags:
- text-classification
metrics:
- accuracy
datasets:
- snli-1.0
- multi-nli-1.0
- nli-fever
- anli-v1.0
widget:
- text: "British mountaineer Alison Hargreaves becomes the first woman to climb Mount Everest alone and without oxygen tanks. [SEP] Alison is a female."
- text: "Mr Lopez Obrador has alleged electoral fraud cost him the presidency, despite a recount confirming Felipe Calderon as Mexico's president-elect. [SEP] Mr Lopez Obrador was born in mexico."
---
## deberta-v3-large-snli_mnli_fever_anli_R1_R2_R3-nli
#### Datasets
This model was trained on the snli-v1.0, multi-nli-1.0, nli-fever, and anli-1.0-r1/anli-1.0-r2/anli-1.0-r3 datasets with training weights of 1, 1, 1, 10, 20, and 10, respectively.
The training codes are mostly referenced from: https://github.com/facebookresearch/anli
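
Before the training details, here is a minimal inference sketch. It assumes the standard `AutoModelForSequenceClassification` API and the premise/hypothesis format used by the widget; the mapping of output indices to entailment/neutral/contradiction should be checked against the checkpoint's config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "Inari/deberta-v3-large-snli_mnli_fever_anli_R1_R2_R3-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "Alison Hargreaves becomes the first woman to climb Mount Everest alone."
hypothesis = "Alison is a female."

# Encode as a sentence pair; the tokenizer inserts the separator token
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print(probs.tolist())  # probabilities over the NLI labels
```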
#### Hyperparameters
learning_rate: 1e-5
max_length: 156
batch_size: 16
warmup_ratio: 0.1
weight_decay: 0.0
num_epochs: 2
#### Dev results
snli-v1.0 | multi-nli-1.0-m | multi-nli-1.0-mm | anli-1.0-r1 | anli-1.0-r2 | anli-1.0-r3
----------|-----------------|------------------|-------------|-------------|------------
0.938 | 0.914 | 0.912 | 0.796 | 0.627 | 0.610
#### Test results
snli-v1.0 | anli-1.0-r1 | anli-1.0-r2 | anli-1.0-r3
-----------|-------------|-------------|------------
0.929 | 0.775 | 0.636 | 0.612 |
reso/DialoGPT-medium-v3ga | d551a467455d619f3414c7e6ffed976a9d75bc86 | 2022-07-04T19:39:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | reso | null | reso/DialoGPT-medium-v3ga | 174 | null | transformers | 3,781 | ---
thumbnail: https://raw.githubusercontent.com/RuolinZheng08/twewy-discord-chatbot/main/gif-demo/icon.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
    # pretty print last output tokens from bot
print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
Helsinki-NLP/opus-mt-mos-en | 31c4672f10b25f8f252e5df197e889866e9d0799 | 2021-09-10T13:58:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"mos",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-mos-en | 173 | null | transformers | 3,782 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-mos-en
* source languages: mos
* target languages: en
* OPUS readme: [mos-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mos-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/mos-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mos-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mos-en/opus-2020-01-21.eval.txt)
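
A minimal translation sketch (assuming the standard MarianMT API in `transformers`) is shown below; the Mossi input sentence is a made-up example for illustration only.

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-mos-en"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

# Hypothetical Mossi input sentence, for illustration only
batch = tokenizer(["Ne y yibeoogo."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```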
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mos.en | 26.1 | 0.408 |
|
Nhut/wav2vec2-large-xlsr-french | b20d283faaa21db8638c577f1f9d9ee6d2ebd157 | 2021-07-05T16:25:03.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Nhut | null | Nhut/wav2vec2-large-xlsr-french | 173 | 1 | transformers | 3,783 | ---
language: fr
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-French by Nhut DOAN NGUYEN
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fr
type: common_voice
args: fr
metrics:
- name: Test WER
type: wer
value: xx.xx
---
# wav2vec2-large-xlsr-53-french
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in French using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fr", split="test[:20%]")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the French test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fr")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.31 %
## Training
V1 of the Common Voice `train`, `validation` datasets were used for training.
## Testing
20% of V6.1 of the Common Voice `Test` dataset was used for testing. |
asapp/sew-d-tiny-100k-ft-ls100h | 443b29018d4aa5af937b2d1ee75d965d63ddf595 | 2022-05-24T13:10:21.000Z | [
"pytorch",
"sew-d",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"audio",
"speech",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | asapp | null | asapp/sew-d-tiny-100k-ft-ls100h | 173 | 1 | transformers | 3,784 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- speech
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: sew-d-tiny-100k-ft-ls100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 10.47
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 22.73
---
# SEW-D-tiny
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, SEWDForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load the model and preprocessor
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
# load the dummy dataset with speech samples
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# preprocess
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **asapp/sew-d-tiny-100k-ft-ls100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import SEWDForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
def map_to_pred(batch):
input_values = processor(batch["audio"][0]["array"], sampling_rate=16000,
return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
| --- | --- |
| 10.47 | 22.73 |
|
cosmoquester/bart-ko-mini | 0dd647b1a8511ed034345004bb0f825a36b10b89 | 2021-08-28T04:59:29.000Z | [
"pytorch",
"tf",
"bart",
"text2text-generation",
"ko",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cosmoquester | null | cosmoquester/bart-ko-mini | 173 | null | transformers | 3,785 | ---
language: ko
---
# Pretrained BART in Korean
This is pretrained BART model with multiple Korean Datasets.
I used multiple datasets so that the model generalizes to both colloquial and written texts.
Training was supported by the [TPU Research Cloud](https://sites.research.google/trc/) program.
The script used to pre-train the model is [here](https://github.com/cosmoquester/transformers-bart-pretrain).
When you use the reference API, you must wrap the sentence with `[BOS]` and `[EOS]` as in the example below.
```
[BOS] 안녕하세요? 반가워요~~ [EOS]
```
You can also test mask filling performance using `[MASK]` token like this.
```
[BOS] [MASK] 먹었어? [EOS]
```
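Outside the hosted widget, a loading sketch along these lines should work. This is an assumption based on the standard `transformers` API for BART checkpoints; the exact tokenizer class used by this repository may differ.
```python
from transformers import AutoTokenizer, BartForConditionalGeneration

name = "cosmoquester/bart-ko-mini"
tokenizer = AutoTokenizer.from_pretrained(name)
model = BartForConditionalGeneration.from_pretrained(name)

# Wrap the input with [BOS] / [EOS] as described above
text = "[BOS] [MASK] 먹었어? [EOS]"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```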
## Benchmark
<style>
table {
border-collapse: collapse;
border-style: hidden;
width: 100%;
}
td, th {
border: 1px solid #4d5562;
padding: 8px;
}
</style>
<table>
<tr>
<th>Dataset</th>
    <td>KLUE NLI dev</td>
<td>NSMC test</td>
<td>QuestionPair test</td>
<td colspan="2">KLUE TC dev</td>
<td colspan="3">KLUE STS dev</td>
<td colspan="3">KorSTS dev</td>
<td colspan="2">HateSpeech dev</td>
</tr>
<tr>
<th>Metric</th>
<!-- KLUE NLI -->
    <td>Acc</td>
<!-- NSMC -->
<td>Acc</td>
<!-- QuestionPair -->
<td>Acc</td>
<!-- KLUE TC -->
<td>Acc</td>
<td>F1</td>
<!-- KLUE STS -->
<td>F1</td>
<td>Pearson</td>
<td>Spearman</td>
<!-- KorSTS -->
<td>F1</td>
<td>Pearson</td>
<td>Spearman</td>
<!-- HateSpeech -->
<td>Bias Acc</td>
<td>Hate Acc</td>
</tr>
<tr>
<th>Score</th>
<!-- KLUE NLI -->
    <td>0.5253</td>
<!-- NSMC -->
<td>0.8425</td>
<!-- QuestionPair -->
<td>0.8945</td>
<!-- KLUE TC -->
<td>0.8047</td>
<td>0.7988</td>
<!-- KLUE STS -->
<td>0.7411</td>
<td>0.7471</td>
<td>0.7399</td>
<!-- KorSTS -->
<td>0.7725</td>
<td>0.6503</td>
<td>0.6191</td>
<!-- HateSpeech -->
<td>0.7537</td>
<td>0.5605</td>
</tr>
</table>
- The performance was measured using [the notebooks here](https://github.com/cosmoquester/transformers-bart-finetune) with colab.
## Used Datasets
### [모두의 말뭉치 (Modu Corpus)](https://corpus.korean.go.kr/)
- 일상 대화 말뭉치 2020 (Everyday Conversation Corpus 2020)
- 구어 말뭉치 (Spoken Corpus)
- 문어 말뭉치 (Written Corpus)
- 신문 말뭉치 (Newspaper Corpus)
### AIhub
- [개방데이터 전문분야말뭉치 (Specialized-Domain Corpus)](https://aihub.or.kr/aidata/30717)
- [개방데이터 한국어대화요약 (Korean Dialogue Summarization)](https://aihub.or.kr/aidata/30714)
- [개방데이터 감성 대화 말뭉치 (Emotional Dialogue Corpus)](https://aihub.or.kr/aidata/7978)
- [개방데이터 한국어 음성 (Korean Speech)](https://aihub.or.kr/aidata/105)
- [개방데이터 한국어 SNS (Korean SNS)](https://aihub.or.kr/aidata/30718)
### [세종 말뭉치 (Sejong Corpus)](https://ithub.korean.go.kr/)
|
dhpollack/distilbert-dummy-sentiment | 459f7eb8f9f7e9f0090d37b04dc46fb3a5c987d7 | 2021-03-23T17:40:32.000Z | [
"pytorch",
"distilbert",
"text-classification",
"multilingual",
"en",
"transformers",
"sentiment-analysis",
"testing",
"unit tests"
] | text-classification | false | dhpollack | null | dhpollack/distilbert-dummy-sentiment | 173 | null | transformers | 3,786 | ---
language:
- "multilingual"
- "en"
tags:
- "sentiment-analysis"
- "testing"
- "unit tests"
---
# DistilBert Dummy Sentiment Model
## Purpose
This is a dummy model that can be used for testing the transformers `pipeline` with the task `sentiment-analysis`. It should always give random results (i.e. `{"label": "negative", "score": 0.5}`).
## How to use
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="dhpollack/distilbert-dummy-sentiment")
results = classifier(["this is a test", "another test"])
```
## Notes
This was created as follows:
1. Create a vocab.txt file (in /tmp/vocab.txt in this example).
```
[UNK]
[SEP]
[PAD]
[CLS]
[MASK]
```
2. Open a python shell:
```python
import transformers
config = transformers.DistilBertConfig(vocab_size=5, n_layers=1, n_heads=1, dim=1, hidden_dim=4 * 1, num_labels=2, id2label={0: "negative", 1: "positive"}, label2id={"negative": 0, "positive": 1})
model = transformers.DistilBertForSequenceClassification(config)
tokenizer = transformers.DistilBertTokenizer("/tmp/vocab.txt", model_max_length=512)
config.save_pretrained(".")
model.save_pretrained(".")
tokenizer.save_pretrained(".")
```
|
kornosk/bert-election2020-twitter-stance-trump-KE-MLM | 94546163d364120dc06da461b0f154ee69df6683 | 2022-05-02T22:58:49.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers",
"twitter",
"stance-detection",
"election2020",
"politics",
"license:gpl-3.0"
] | text-classification | false | kornosk | null | kornosk/bert-election2020-twitter-stance-trump-KE-MLM | 173 | 1 | transformers | 3,787 | ---
language: "en"
tags:
- twitter
- stance-detection
- election2020
- politics
license: "gpl-3.0"
---
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (KE-MLM)
Pre-trained weights for **KE-MLM model** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. It was then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Donald Trump.
# Training Objective
This model is initialized with BERT-base and trained with the normal MLM objective, with a classification layer fine-tuned for stance detection towards Donald Trump.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# select model path here
pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-trump-KE-MLM"
# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path)
id2label = {
0: "AGAINST",
1: "FAVOR",
2: "NONE"
}
##### Prediction Neutral #####
sentence = "Hello World."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Favor #####
sentence = "Go Go Trump!!!"
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Against #####
sentence = "Trump is the worst."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
# please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
``` |
pertschuk/albert-intent-model-v3 | a2cd0d5365e563c569eb4c0314caf7977312dcf2 | 2020-04-24T16:05:05.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | pertschuk | null | pertschuk/albert-intent-model-v3 | 173 | null | transformers | 3,788 | Entry not found |
yosemite/autonlp-imdb-sentiment-analysis-english-470512388 | 048ac53e79fb2eddd3a04b74f0981df4f414d013 | 2022-01-04T17:34:50.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:yosemite/autonlp-data-imdb-sentiment-analysis-english",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | yosemite | null | yosemite/autonlp-imdb-sentiment-analysis-english-470512388 | 173 | null | transformers | 3,789 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- yosemite/autonlp-data-imdb-sentiment-analysis-english
co2_eq_emissions: 256.38650494338367
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 470512388
- CO2 Emissions (in grams): 256.38650494338367
## Validation Metrics
- Loss: 0.18712733685970306
- Accuracy: 0.9388
- Precision: 0.9300274402195218
- Recall: 0.949
- AUC: 0.98323192
- F1: 0.9394179370421698
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/yosemite/autonlp-imdb-sentiment-analysis-english-470512388
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("yosemite/autonlp-imdb-sentiment-analysis-english-470512388", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("yosemite/autonlp-imdb-sentiment-analysis-english-470512388", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Gunulhona/tbnlimodel_v1 | 94a6673d4d2d98dc6c3a553052e74b2c232a50b1 | 2022-07-30T08:57:14.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers"
] | feature-extraction | false | Gunulhona | null | Gunulhona/tbnlimodel_v1 | 173 | null | transformers | 3,790 | Entry not found |
Davlan/bert-base-multilingual-cased-finetuned-swahili | 24be68534e4b27a44f1d4791fc1c39bc014863c4 | 2022-06-27T11:50:13.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"ha",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/bert-base-multilingual-cased-finetuned-swahili | 172 | 1 | transformers | 3,791 | ---
language: sw
---
# bert-base-multilingual-cased-finetuned-swahili
## Model description
**bert-base-multilingual-cased-finetuned-swahili** is a **Swahili BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Swahili-language texts. It provides **better performance** than multilingual BERT on Swahili text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on a Swahili corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-swahili')
>>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko [MASK] kwamba hakuna uhalifu ulitendwa")
[{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa',
'score': 0.31642526388168335,
'token': 10728,
'token_str': 'Paris'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Rwanda kwamba hakuna uhalifu ulitendwa',
'score': 0.15753623843193054,
'token': 57557,
'token_str': 'Rwanda'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Burundi kwamba hakuna uhalifu ulitendwa',
'score': 0.07211585342884064,
'token': 57824,
'token_str': 'Burundi'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa',
'score': 0.029844321310520172,
'token': 10688,
'token_str': 'France'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Senegal kwamba hakuna uhalifu ulitendwa',
'score': 0.0265930388122797,
'token': 38052,
'token_str': 'Senegal'}]
```
#### Limitations and bias
This model is limited by its training corpus (Swahili CC-100 web text) and may not generalize well to all use cases or domains.
## Training data
This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset | mBERT F1 | sw_bert F1
--------|----------|-----------
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.80 | 89.36
### BibTeX entry and citation info
By David Adelani
|
JP040/bert-german-sentiment-twitter | 491f7278ac030c56601b1021c6fc68454d57b3ca | 2021-05-18T21:14:38.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | JP040 | null | JP040/bert-german-sentiment-twitter | 172 | null | transformers | 3,792 | Entry not found |
Rostlab/prot_bert_bfd_ss3 | 058aa452532c34f803dca9b89c80a85417e59c17 | 2021-05-18T22:11:42.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Rostlab | null | Rostlab/prot_bert_bfd_ss3 | 172 | 1 | transformers | 3,793 | Entry not found |
Wikidepia/marian-nmt-enid | 1e36963d83e7c41b45972443158537f746e40729 | 2021-06-03T07:27:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Wikidepia | null | Wikidepia/marian-nmt-enid | 172 | null | transformers | 3,794 | # NMT Model for English-Indonesian
|
alenusch/rugpt2-paraphraser | 05d219ffa2b85b62f73f36037713299ee56de09c | 2021-05-21T12:47:57.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | alenusch | null | alenusch/rugpt2-paraphraser | 172 | null | transformers | 3,795 | Entry not found |
brandon25/deberta-base-finetuned-ner | 3ec3cbb582c8d490f9a7555ec7f6d8f2e24961a3 | 2021-10-12T08:05:37.000Z | [
"pytorch",
"tensorboard",
"deberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | brandon25 | null | brandon25/deberta-base-finetuned-ner | 172 | 1 | transformers | 3,796 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: deberta-base-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9563020492186769
- name: Recall
type: recall
value: 0.9652436720816018
- name: F1
type: f1
value: 0.9607520564042303
- name: Accuracy
type: accuracy
value: 0.9899205302077261
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-ner
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0501
- Precision: 0.9563
- Recall: 0.9652
- F1: 0.9608
- Accuracy: 0.9899
## Model description
More information needed
## Intended uses & limitations
More information needed
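As a rough illustration, the checkpoint can be loaded with the standard token-classification pipeline. This is a hedged sketch (only the model id comes from this card), not usage documentation from the model author:
```python
from transformers import pipeline

# Hedged example: the model id is from this card; the pipeline call is standard 🤗 Transformers usage.
ner = pipeline(
    "token-classification",
    model="brandon25/deberta-base-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```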
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
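For readers who want to set up a comparable run, the hyperparameters listed above map onto `TrainingArguments` roughly as follows. This is an illustrative sketch: the exact training script is not part of this card, and the output directory name is made up.
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameter list above; dataset loading and the Trainer itself are omitted.
training_args = TrainingArguments(
    output_dir="deberta-base-finetuned-ner",  # illustrative path, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```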
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1419 | 1.0 | 878 | 0.0628 | 0.9290 | 0.9288 | 0.9289 | 0.9835 |
| 0.0379 | 2.0 | 1756 | 0.0466 | 0.9456 | 0.9567 | 0.9511 | 0.9878 |
| 0.0176 | 3.0 | 2634 | 0.0473 | 0.9539 | 0.9575 | 0.9557 | 0.9890 |
| 0.0098 | 4.0 | 3512 | 0.0468 | 0.9570 | 0.9635 | 0.9603 | 0.9896 |
| 0.0043 | 5.0 | 4390 | 0.0501 | 0.9563 | 0.9652 | 0.9608 | 0.9899 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dmitry-vorobiev/rubert_ria_headlines | 9fda809a2528837e2d142439e52c78305a921e28 | 2021-09-22T08:20:24.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"ru",
"transformers",
"summarization",
"bert",
"rubert",
"license:mit",
"autotrain_compatible"
] | summarization | false | dmitry-vorobiev | null | dmitry-vorobiev/rubert_ria_headlines | 172 | null | transformers | 3,797 | ---
language:
- ru
tags:
- summarization
- bert
- rubert
license: mit
---
# rubert_ria_headlines
## Description
*bert2bert* model, initialized with the `DeepPavlov/rubert-base-cased` pretrained weights and
fine-tuned on the first 99% of ["Rossiya Segodnya" news dataset](https://github.com/RossiyaSegodnya/ria_news_dataset) for 2 epochs.
## Usage example
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
MODEL_NAME = "dmitry-vorobiev/rubert_ria_headlines"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
text = "Скопируйте текст статьи / новости"
encoded_batch = tokenizer.prepare_seq2seq_batch(
[text],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512)
output_ids = model.generate(
input_ids=encoded_batch["input_ids"],
max_length=36,
no_repeat_ngram_size=3,
num_beams=5,
top_k=0
)
headline = tokenizer.decode(output_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=False)
print(headline)
```
## Datasets
- [ria_news](https://github.com/RossiyaSegodnya/ria_news_dataset)
## How it was trained?
I used free TPUv3 on kaggle. The model was trained for 3 epochs with effective batch size 192 and soft restarts (warmup steps 1500 / 500 / 500 with new optimizer state on each epoch start).
- [1 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53254694)
- [2 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53269040)
- [3 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53280797)
Common train params:
```shell
export XLA_USE_BF16=1
export XLA_TENSOR_ALLOCATOR_MAXSIZE=100000000
python nlp_headline_rus/src/train_seq2seq.py \
--do_train \
--tie_encoder_decoder \
--max_source_length 512 \
--max_target_length 32 \
--val_max_target_length 48 \
--tpu_num_cores 8 \
--per_device_train_batch_size 24 \
--gradient_accumulation_steps 1 \
--learning_rate 5e-4 \
--adam_epsilon 1e-6 \
--weight_decay 1e-5 \
```
## Validation results
- Using [last 1% of ria](https://drive.google.com/drive/folders/1ztAeyb1BiLMgXwOgOJS7WMR4PGiI1q92) dataset
- Using [gazeta_ru test](https://drive.google.com/drive/folders/1CyowuRpecsLTcDbqEfmAvkCWOod58g_e) split
- Using [gazeta_ru val](https://drive.google.com/drive/folders/1XZFOXHSXLKdhzm61ceVLw3aautrdskIu) split |
facebook/vit-mae-huge | 5e0e60b29318a30e9ed13e27cb56d28071704980 | 2022-03-29T16:39:28.000Z | [
"pytorch",
"tf",
"vit_mae",
"pretraining",
"dataset:imagenet-1k",
"arxiv:2111.06377",
"transformers",
"vision",
"license:apache-2.0"
] | null | false | facebook | null | facebook/vit-mae-huge | 172 | 1 | transformers | 3,798 | ---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (huge-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in [this repository](https://github.com/facebookresearch/mae).
Disclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
During pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/vit-mae) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoFeatureExtractor, ViTMAEForPreTraining
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/vit-mae-huge')
model = ViTMAEForPreTraining.from_pretrained('facebook/vit-mae-huge')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss
mask = outputs.mask
ids_restore = outputs.ids_restore
```
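Continuing the example above, the per-patch predictions can be folded back into image space. This assumes the `unpatchify` helper exposed by `ViTMAEForPreTraining`; un-normalizing the pixels and overlaying the mask for visualization are left out of this sketch.
```python
# Sketch: reshape the decoder's per-patch outputs back to (batch, channels, height, width).
reconstruction = model.unpatchify(outputs.logits)
print(reconstruction.shape)  # expected: torch.Size([1, 3, 224, 224])
```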
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-06377,
author = {Kaiming He and
Xinlei Chen and
Saining Xie and
Yanghao Li and
Piotr Doll{\'{a}}r and
Ross B. Girshick},
title = {Masked Autoencoders Are Scalable Vision Learners},
journal = {CoRR},
volume = {abs/2111.06377},
year = {2021},
url = {https://arxiv.org/abs/2111.06377},
eprinttype = {arXiv},
eprint = {2111.06377},
timestamp = {Tue, 16 Nov 2021 12:12:31 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-06377.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggingartists/eminem | ead4f2b57f8b9c818e49e5df460593dd8c0ec318 | 2022-07-14T16:45:19.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/eminem",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
] | text-generation | false | huggingartists | null | huggingartists/eminem | 172 | null | transformers | 3,799 | ---
language: en
datasets:
- huggingartists/eminem
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/c7367126e7e6ebc13fcea9d4efca0204.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Eminem</div>
<a href="https://genius.com/artists/eminem">
<div style="text-align: center; font-size: 14px;">@eminem</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Eminem.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/eminem).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/eminem")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2h8vhx6h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Eminem's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/pgt39elq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/pgt39elq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/eminem')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/eminem")
model = AutoModelWithLMHead.from_pretrained("huggingartists/eminem")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|