modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
balamurugan1603/bert-finetuned-ner | balamurugan1603 | 2021-11-25T17:00:00Z | 19 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | # Named Entity Recognition using Transformers
This is a fine-tuned version of BERT, built with Hugging Face Transformers, that performs Named Entity Recognition on text data. BERT is a state-of-the-art attention-based model pretrained with masked-language-modeling and next-sentence-prediction objectives. It is used for a wide range of tasks, including question answering and text summarization, and it also performs token classification tasks such as NER with strong results.
# Dataset
**CoNLL-2003**:
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations, and names of miscellaneous entities that do not belong to the previous three groups.<br><br>
**Link** : https://huggingface.co/datasets/conll2003
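For reference, the dataset can also be loaded programmatically (a minimal sketch assuming the `datasets` library; the split and column names follow the Hub dataset card):
```python
from datasets import load_dataset

# CoNLL-2003 as hosted on the Hugging Face Hub
conll = load_dataset("conll2003")
# Each example is a list of tokens with aligned NER tag ids
print(conll["train"][0]["tokens"])
print(conll["train"][0]["ner_tags"])
```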
# Using this fine-tuned version
From Python, download the whole pipeline and use it directly with the following code:
```python
from transformers import pipeline
# Loading the pipeline from the hub.
# The pipeline handles the preprocessing and post-processing steps.
model_checkpoint = "balamurugan1603/bert-finetuned-ner"
namedEntityRecogniser = pipeline(
"token-classification", model=model_checkpoint, aggregation_strategy="simple"
)
```
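The pipeline can then be called directly on raw text. The sentence and output below are illustrative, not outputs from this exact model; with `aggregation_strategy="simple"` the pipeline returns one dict per aggregated entity:
```python
# Illustrative usage: each result has entity_group, score, word, start, end.
example = "Hugging Face is based in New York City."
for entity in namedEntityRecogniser(example):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```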
A reference for using this pipeline to find NER tags can be found in this <a href="https://github.com/balamurugan1603/Named-Entity-Recognition-using-Tranformers/blob/main/named-entity-recognition-using-transfer-learning.ipynb">notebook</a>.
|
abdouaziiz/bert-base-wolof | abdouaziiz | 2021-11-25T16:35:19Z | 16 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"language-model",
"wo",
"wolof",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: wo
tags:
- bert
- language-model
- wo
- wolof
---
# Soraberta: Unsupervised Language Model Pre-training for Wolof
**bert-base-wolof** is a bert-base model pretrained on the Wolof language.
## Soraberta models
| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `bert-base` | 6 | 12 | 514 | 56,931,622 (≈57M) |
## Using Soraberta with Hugging Face's Transformers
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='abdouaziiz/bert-base-wolof')
>>> unmasker("kuy yoot du [MASK].")
[{'sequence': '[CLS] kuy yoot du seqet. [SEP]',
'score': 0.09505125880241394,
'token': 13578},
{'sequence': '[CLS] kuy yoot du daw. [SEP]',
'score': 0.08882280439138412,
'token': 679},
{'sequence': '[CLS] kuy yoot du yoot. [SEP]',
'score': 0.057790059596300125,
'token': 5117},
{'sequence': '[CLS] kuy yoot du seqat. [SEP]',
'score': 0.05671025067567825,
'token': 4992},
{'sequence': '[CLS] kuy yoot du yaqu. [SEP]',
'score': 0.0469999685883522,
'token': 1735}]
```
## Training data
The data sources are [Bible OT](http://biblewolof.com/), [WOLOF-ONLINE](http://www.wolof-online.com/),
and [ALFFA_PUBLIC](https://github.com/getalp/ALFFA_PUBLIC/tree/master/ASR/WOLOF).
## Contact
Please contact [email protected] with any questions, feedback, or requests. |
espnet/kan-bayashi_csj_asr_train_asr_conformer | espnet | 2021-11-25T09:30:10Z | 5 | 1 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"jp",
"dataset:csj",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: jp
datasets:
- csj
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/kan-bayashi_csj_asr_train_asr_conformer`
This model was trained by Nelson Yalta using the csj recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 0d8cd47dd3572248b502bc831cd305e648170233
pip install -e .
cd egs2/csj/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/kan-bayashi_csj_asr_train_asr_conformer
```
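Alternatively, the model can be used for inference directly from Python. This is a minimal sketch assuming a recent ESPnet release with the `espnet_model_zoo` package installed; `example.wav` is a placeholder for your own 16 kHz audio file:
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Download the model from the hub and build an inference wrapper
speech2text = Speech2Text.from_pretrained(
    "espnet/kan-bayashi_csj_asr_train_asr_conformer"
)

speech, rate = soundfile.read("example.wav")  # mono, 16 kHz expected
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]  # best hypothesis first
print(text)
```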
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_char_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 47308
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 6
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
pretrain_path: []
pretrain_key: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 15000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_sp/train/speech_shape
- exp/asr_stats_raw_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_sp/valid/speech_shape
- exp/asr_stats_raw_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodup_sp/wav.scp
- speech
- sound
- - dump/raw/train_nodup_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/wav.scp
- speech
- sound
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- "\u306E"
- "\u3044"
- "\u3067"
- "\u3068"
- "\u30FC"
- "\u3066"
- "\u3046"
- "\u307E"
- "\u3059"
- "\u3057"
- "\u306B"
- "\u3063"
- "\u306A"
- "\u3048"
- "\u305F"
- "\u3053"
- "\u304C"
- "\u304B"
- "\u306F"
- "\u308B"
- "\u3042"
- "\u3093"
- "\u308C"
- "\u3082"
- "\u3092"
- "\u305D"
- "\u308A"
- "\u3089"
- "\u3051"
- "\u304F"
- "\u3069"
- "\u3088"
- "\u304D"
- "\u3060"
- "\u304A"
- "\u30F3"
- "\u306D"
- "\u4E00"
- "\u3055"
- "\u30B9"
- "\u8A00"
- "\u3061"
- "\u3064"
- "\u5206"
- "\u30C8"
- "\u3084"
- "\u4EBA"
- "\u30EB"
- "\u601D"
- "\u308F"
- "\u6642"
- "\u65B9"
- "\u3058"
- "\u30A4"
- "\u884C"
- "\u4F55"
- "\u307F"
- "\u5341"
- "\u30E9"
- "\u4E8C"
- "\u672C"
- "\u8A9E"
- "\u5927"
- "\u7684"
- "\u30AF"
- "\u30BF"
- "\u308D"
- "\u3070"
- "\u3087"
- "\u3083"
- "\u97F3"
- "\u51FA"
- "\u305B"
- "\u30C3"
- "\u5408"
- "\u65E5"
- "\u4E2D"
- "\u751F"
- "\u4ECA"
- "\u898B"
- "\u30EA"
- "\u9593"
- "\u8A71"
- "\u3081"
- "\u30A2"
- "\u5F8C"
- "\u81EA"
- "\u305A"
- "\u79C1"
- "\u30C6"
- "\u4E0A"
- "\u5E74"
- "\u5B66"
- "\u4E09"
- "\u30B7"
- "\u5834"
- "\u30C7"
- "\u5B9F"
- "\u5B50"
- "\u4F53"
- "\u8003"
- "\u5BFE"
- "\u7528"
- "\u6587"
- "\u30D1"
- "\u5F53"
- "\u7D50"
- "\u5EA6"
- "\u5165"
- "\u8A33"
- "\u30D5"
- "\u98A8"
- "\u30E0"
- "\u30D7"
- "\u6700"
- "\u30C9"
- "\u30EC"
- "\u30ED"
- "\u4F5C"
- "\u6570"
- "\u76EE"
- "\u30B8"
- "\u95A2"
- "\u30B0"
- "\u767A"
- "\u8005"
- "\u5B9A"
- "\u3005"
- "\u3050"
- "\u30B3"
- "\u4E8B"
- "\u624B"
- "\u5168"
- "\u5909"
- "\u30DE"
- "\u6027"
- "\u8868"
- "\u4F8B"
- "\u52D5"
- "\u8981"
- "\u5148"
- "\u524D"
- "\u610F"
- "\u90E8"
- "\u4F1A"
- "\u6301"
- "\u30E1"
- "\u5316"
- "\u9054"
- "\u4ED8"
- "\u5F62"
- "\u73FE"
- "\u4E94"
- "\u30AB"
- "\u3079"
- "\u53D6"
- "\u56DE"
- "\u5E38"
- "\u4F7F"
- "\u611F"
- "\u66F8"
- "\u6C17"
- "\u6CD5"
- "\u7A0B"
- "\u3071"
- "\u56DB"
- "\u591A"
- "\u8272"
- "\u30BB"
- "\u7406"
- "\u975E"
- "\u30D0"
- "\u58F0"
- "\u5358"
- "\u756A"
- "\uFF21"
- "\u6210"
- "\u540C"
- "\u901A"
- "\u30A3"
- "\u679C"
- "\u30AD"
- "\u554F"
- "\u984C"
- "\u69CB"
- "\u56FD"
- "\u6765"
- "\u9AD8"
- "\u6B21"
- "\u9A13"
- "\u3052"
- "\u30C1"
- "\u4EE5"
- "\u3054"
- "\u4EE3"
- "\u30E2"
- "\u30AA"
- "\u51C4"
- "\u7279"
- "\u77E5"
- "\u30E5"
- "\u7269"
- "\u660E"
- "\u70B9"
- "\u5473"
- "\u767E"
- "\u89E3"
- "\u8FD1"
- "\u8B58"
- "\u5730"
- "\u540D"
- "\u805E"
- "\u4E0B"
- "\u5C0F"
- "\u6559"
- "\u30B5"
- "\u70BA"
- "\u4E5D"
- "\u30D6"
- "\u5BB6"
- "\u30CB"
- "\u521D"
- "\u30D9"
- "\u30E7"
- "\u5C11"
- "\u8A8D"
- "\u8AD6"
- "\u529B"
- "\u516D"
- "\u30D3"
- "\u60C5"
- "\u7FD2"
- "\u30A6"
- "\u7ACB"
- "\u5FC3"
- "\u8ABF"
- "\u5831"
- "\u30A8"
- "\uFF24"
- "\uFF2E"
- "\u793A"
- "\u793E"
- "\u9055"
- "\u969B"
- "\u3056"
- "\u8AAC"
- "\u5FDC"
- "\u98DF"
- "\u72B6"
- "\u9577"
- "\u7814"
- "\u6821"
- "\u5185"
- "\u639B"
- "\u30DF"
- "\u5916"
- "\u5411"
- "\u80FD"
- "\u516B"
- "\u9762"
- "\u7A76"
- "\u7136"
- "\u3073"
- "\u30D4"
- "\u4E3B"
- "\u4FC2"
- "\u5024"
- "\u91CD"
- "\u8A5E"
- "\u4F9B"
- "\u5F97"
- "\u5FC5"
- "\u5973"
- "\u78BA"
- "\u7D42"
- "\u30BA"
- "\u6BCD"
- "\u696D"
- "\u7387"
- "\u65B0"
- "\u6D3B"
- "\u697D"
- "\u8449"
- "\u8A08"
- "\u30CA"
- "\u3080"
- "\u6240"
- "\u4E16"
- "\u6B63"
- "\u30E3"
- "\u8A18"
- "\u671F"
- "\u5207"
- "\u3078"
- "\u6A5F"
- "\u30DA"
- "\u5343"
- "\u985E"
- "\u5143"
- "\u614B"
- "\u826F"
- "\u5728"
- "\u6709"
- "\u30C0"
- "\u4E03"
- "\uFF23"
- "\u5225"
- "\u30EF"
- "\u691C"
- "\u7D9A"
- "\u9078"
- "\u57FA"
- "\u76F8"
- "\u6708"
- "\u4FA1"
- "\u7D20"
- "\u4ED6"
- "\u6BD4"
- "\u9023"
- "\u96C6"
- "\u30A7"
- "\u307B"
- "\u4F4D"
- "\u597D"
- "\uFF2D"
- "\u5F37"
- "\u4E0D"
- "\u5FA1"
- "\u6790"
- "\u30DD"
- "\u7121"
- "\u89AA"
- "\u53D7"
- "\u3086"
- "\u7F6E"
- "\u8C61"
- "\u4ED5"
- "\u5F0F"
- "\u30CD"
- "\u6307"
- "\u8AAD"
- "\u6C7A"
- "\u8ECA"
- "\u96FB"
- "\u904E"
- "\u30B1"
- "\u8A55"
- "\u5229"
- "\u6B8B"
- "\u8D77"
- "\u30CE"
- "\u7D4C"
- "\u56F3"
- "\u4F1D"
- "\u500B"
- "\u30C4"
- "\u7BC0"
- "\u9053"
- "\u5E73"
- "\u91D1"
- "\u899A"
- "\uFF34"
- "\u4F4F"
- "\u59CB"
- "\u63D0"
- "\u5B58"
- "\u5171"
- "\u30DB"
- "\u7B2C"
- "\u7D44"
- "\u89B3"
- "\u80B2"
- "\u6771"
- "\u305E"
- "\u958B"
- "\u52A0"
- "\u5F15"
- "\uFF33"
- "\u53E3"
- "\u6C34"
- "\u5BB9"
- "\u5468"
- "\u5B87"
- "\u7D04"
- "\u5B57"
- "\u3076"
- "\u9803"
- "\u3072"
- "\u5B99"
- "\u6BB5"
- "\u30BD"
- "\u97FF"
- "\u30DC"
- "\u53CB"
- "\u91CF"
- "\u6599"
- "\u3085"
- "\u5CF6"
- "\u8EAB"
- "\u76F4"
- "\u753B"
- "\u7DDA"
- "\u54C1"
- "\u5DEE"
- "\u4EF6"
- "\u9069"
- "\u5F35"
- "\u8FBA"
- "\u8FBC"
- "\u91CE"
- "\u69D8"
- "\u578B"
- "\u4E88"
- "\u7A2E"
- "\u5074"
- "\u8FF0"
- "\u5C71"
- "\u5C4B"
- "\u5E30"
- "\u30CF"
- "\u4E57"
- "\u539F"
- "\u683C"
- "\u8CEA"
- "\u666E"
- "\uFF30"
- "\u9020"
- "\u753A"
- "\u30B4"
- "\u82F1"
- "\u63A5"
- "\u304E"
- "\u6E2C"
- "\u3075"
- "\u7FA9"
- "\u4EAC"
- "\u5272"
- "\u5236"
- "\u7B54"
- "\u5404"
- "\u4FE1"
- "\u754C"
- "\u6211"
- "\u7A7A"
- "\uFF0E"
- "\u7740"
- "\u53EF"
- "\u66F4"
- "\u6D77"
- "\u4E0E"
- "\u9032"
- "\u52B9"
- "\u5F7C"
- "\u771F"
- "\u7530"
- "\u5FB4"
- "\u6D41"
- "\u5177"
- "\uFF32"
- "\u5E02"
- "\u67FB"
- "\u5B89"
- "\uFF22"
- "\u5E83"
- "\u50D5"
- "\u6CE2"
- "\u5C40"
- "\u8A2D"
- "\u7537"
- "\u767D"
- "\u30B6"
- "\u53CD"
- "\u6226"
- "\u533A"
- "\u6C42"
- "\u96D1"
- "\uFF29"
- "\u6B69"
- "\u8CB7"
- "\u982D"
- "\u7B97"
- "\u534A"
- "\u4FDD"
- "\u5E03"
- "\u96E3"
- "\uFF2C"
- "\u5224"
- "\u843D"
- "\u8DB3"
- "\u5E97"
- "\u7533"
- "\u8FD4"
- "\u30AE"
- "\u4E07"
- "\u6728"
- "\u6614"
- "\u8F03"
- "\u7D22"
- "\uFF26"
- "\u30B2"
- "\u6B86"
- "\u60AA"
- "\u5883"
- "\u548C"
- "\u907A"
- "\u57DF"
- "\u968E"
- "\u542B"
- "\u305C"
- "\u30BC"
- "\u65AD"
- "\u9650"
- "\u63A8"
- "\u4F4E"
- "\u5F71"
- "\u898F"
- "\u6319"
- "\u90FD"
- "\u307C"
- "\u6848"
- "\u4EEE"
- "\u88AB"
- "\u547C"
- "\u30A1"
- "\u96E2"
- "\u7CFB"
- "\u79FB"
- "\u30AC"
- "\u5DDD"
- "\u6E96"
- "\u904B"
- "\u6761"
- "\u5FF5"
- "\u6C11"
- "\uFF27"
- "\u7236"
- "\u75C5"
- "\u79D1"
- "\u4E21"
- "\u7531"
- "\u8A66"
- "\u56E0"
- "\u547D"
- "\u795E"
- "\uFF28"
- "\u7570"
- "\u7C21"
- "\u53E4"
- "\u6F14"
- "\u5897"
- "\u51E6"
- "\u8B70"
- "\u7DD2"
- "\u7CBE"
- "\u6613"
- "\u53F7"
- "\u65CF"
- "\u52FF"
- "\u60F3"
- "\u5217"
- "\u5C0E"
- "\u8EE2"
- "\u54E1"
- "\u30E6"
- "\u6BCE"
- "\u8996"
- "\u4E26"
- "\u98DB"
- "\u4F3C"
- "\u6620"
- "\u7D71"
- "\u4EA4"
- "\u30D2"
- "\u6B4C"
- "\u5F85"
- "\u8CC7"
- "\u8907"
- "\u8AA4"
- "\u63DB"
- "\u6A19"
- "\u6CC1"
- "\u914D"
- "\u62BD"
- "\u822C"
- "\u7403"
- "\u9006"
- "\u65C5"
- "\u6628"
- "\u9662"
- "\u99C5"
- "\u74B0"
- "\u5BDF"
- "\u516C"
- "\u6B73"
- "\u5C5E"
- "\u8F9E"
- "\u5947"
- "\u6CBB"
- "\u5E7E"
- "\u82E5"
- "\u58F2"
- "\u632F"
- "\u7686"
- "\u6CE8"
- "\u6B74"
- "\u9805"
- "\u5F93"
- "\u5747"
- "\u5F79"
- "\u9806"
- "\u53BB"
- "\u56E3"
- "\u8853"
- "\u7DF4"
- "\u6FC0"
- "\u6982"
- "\u66FF"
- "\u7B49"
- "\u98F2"
- "\u53F2"
- "\u88DC"
- "\u901F"
- "\u53C2"
- "\u65E9"
- "\u53CE"
- "\u9332"
- "\u671D"
- "\u5186"
- "\u5370"
- "\u5668"
- "\u63A2"
- "\u7D00"
- "\u9001"
- "\u6E1B"
- "\u571F"
- "\u5929"
- "\uFF2F"
- "\u50BE"
- "\u72AC"
- "\u9060"
- "\u5E2F"
- "\u52A9"
- "\u6A2A"
- "\u591C"
- "\u7523"
- "\u8AB2"
- "\u5BA2"
- "\u629E"
- "\u5712"
- "\u4E38"
- "\u50CF"
- "\u50CD"
- "\u6750"
- "\u5DE5"
- "\u904A"
- "\u544A"
- "\u523A"
- "\u6539"
- "\u8D64"
- "\u8074"
- "\u4ECB"
- "\u8077"
- "\u53F0"
- "\u77ED"
- "\u8AB0"
- "\u7D30"
- "\u672A"
- "\u770C"
- "\u9928"
- "\u6B62"
- "\u53F3"
- "\u306C"
- "\u3065"
- "\u56F2"
- "\u8A0E"
- "\u6B7B"
- "\u5EFA"
- "\u592B"
- "\u7AE0"
- "\u964D"
- "\u666F"
- "\u706B"
- "\u30A9"
- "\u9E97"
- "\u8B1B"
- "\u72EC"
- "\u5DE6"
- "\u5C64"
- "\uFF25"
- "\u5C55"
- "\u653F"
- "\u5099"
- "\u4F59"
- "\u7D76"
- "\u5065"
- "\u518D"
- "\u9580"
- "\u5546"
- "\u52DD"
- "\u52C9"
- "\u82B1"
- "\u30E4"
- "\u8EF8"
- "\u97FB"
- "\u66F2"
- "\u6574"
- "\u652F"
- "\u6271"
- "\u53E5"
- "\u6280"
- "\u5317"
- "\u30D8"
- "\u897F"
- "\u5247"
- "\u4FEE"
- "\u6388"
- "\u9031"
- "\u5BA4"
- "\u52D9"
- "\u9664"
- "\u533B"
- "\u6563"
- "\u56FA"
- "\u7AEF"
- "\u653E"
- "\u99AC"
- "\u7A4D"
- "\u8208"
- "\u592A"
- "\u5ACC"
- "\u9F62"
- "\u672B"
- "\u7D05"
- "\u6E90"
- "\u6E80"
- "\u5931"
- "\u5BDD"
- "\u6D88"
- "\u6E08"
- "\u4FBF"
- "\u983C"
- "\u4F01"
- "\u5B8C"
- "\u4F11"
- "\u9752"
- "\u7591"
- "\u8D70"
- "\u6975"
- "\u767B"
- "\u8AC7"
- "\u6839"
- "\u6025"
- "\u512A"
- "\u7D75"
- "\u623B"
- "\u5E2B"
- "\u5F59"
- "\u6DF7"
- "\u8DEF"
- "\u7E70"
- "\uFF2B"
- "\u8A3C"
- "\u713C"
- "\u6562"
- "\u5BB3"
- "\u96F6"
- "\u6253"
- "\u82E6"
- "\u7701"
- "\u7D19"
- "\u5C02"
- "\u8DDD"
- "\u9854"
- "\u8D8A"
- "\u4E89"
- "\u56F0"
- "\u5BC4"
- "\u5199"
- "\u4E92"
- "\u6DF1"
- "\u5A5A"
- "\u7DCF"
- "\u89A7"
- "\u80CC"
- "\u7BC9"
- "\u6E29"
- "\u8336"
- "\u62EC"
- "\u8CA0"
- "\u590F"
- "\u89E6"
- "\u7D14"
- "\u9045"
- "\u58EB"
- "\u96A3"
- "\u6050"
- "\u91C8"
- "\u967A"
- "\u5150"
- "\u5BBF"
- "\u6A21"
- "\u77F3"
- "\u983B"
- "\u5B09"
- "\u5EA7"
- "\u7642"
- "\u7E4B"
- "\uFF38"
- "\u5C06"
- "\u8FFD"
- "\u5EAD"
- "\u6238"
- "\u5371"
- "\u5BC6"
- "\u5DF1"
- "\u9014"
- "\u7BC4"
- "\u99C4"
- "\u7D39"
- "\u4EFB"
- "\u968F"
- "\u5357"
- "\uFF11"
- "\u5EB7"
- "\u9818"
- "\u5FD8"
- "\u3045"
- "\u59FF"
- "\u7F8E"
- "\u55B6"
- "\u6349"
- "\u65E2"
- "\u7167"
- "\uFF2A"
- "\u4EF2"
- "\u9152"
- "\u52E2"
- "\u9ED2"
- "\u5149"
- "\u6E21"
- "\u75DB"
- "\u62C5"
- "\u5F31"
- "\u307D"
- "\uFF36"
- "\u7D0D"
- "\u629C"
- "\u5E45"
- "\u6D17"
- "\u7A81"
- "\u671B"
- "\u5373"
- "\u9858"
- "\u7565"
- "\uFF12"
- "\u9811"
- "\u5FD7"
- "\u5B85"
- "\u7247"
- "\u656C"
- "\u6751"
- "\u60B2"
- "\u81A8"
- "\u89D2"
- "\u30E8"
- "\u4F9D"
- "\u8A73"
- "\u5F8B"
- "\u9B5A"
- "\u52B4"
- "\u5A66"
- "\u6163"
- "\u732B"
- "\u5019"
- "\u8001"
- "\u558B"
- "\u79F0"
- "\u796D"
- "\u7FA4"
- "\u7E2E"
- "\u6C38"
- "\u616E"
- "\u5EF6"
- "\u7A3F"
- "\u611B"
- "\u8089"
- "\u9589"
- "\u8CBB"
- "\u6295"
- "\u6D3E"
- "\u81F4"
- "\u7BA1"
- "\u7C73"
- "\u5E95"
- "\u7D99"
- "\u6C0F"
- "\u690D"
- "\u501F"
- "\u5727"
- "\u52E4"
- "\u6F22"
- "\u66AE"
- "\u5F27"
- "\u88C5"
- "\u57CE"
- "\u5287"
- "\u76DB"
- "\u63F4"
- "\u9244"
- "\u8C37"
- "\u5E72"
- "\u7E26"
- "\u8A31"
- "\u6016"
- "\u9A5A"
- "\u8A8C"
- "\uFF35"
- "\u8B77"
- "\u5B88"
- "\u8033"
- "\u6B32"
- "\u8239"
- "\uFF10"
- "\u5178"
- "\u67D3"
- "\u7D1A"
- "\u98FE"
- "\u5144"
- "\u71B1"
- "\u8F09"
- "\u88FD"
- "\u5BFA"
- "\u662D"
- "\u7FFB"
- "\u5426"
- "\u5584"
- "\u62BC"
- "\u53CA"
- "\u6A29"
- "\u559C"
- "\u670D"
- "\u8CB0"
- "\u8EFD"
- "\u677F"
- "\u61B6"
- "\u98FC"
- "\u5C3E"
- "\u5FA9"
- "\u5E78"
- "\u7389"
- "\u5354"
- "\u679A"
- "\u90CE"
- "\u8840"
- "\u524A"
- "\u5922"
- "\u63A1"
- "\u6674"
- "\u6B20"
- "\u602A"
- "\u65BD"
- "\u7DE8"
- "\u98EF"
- "\u7B56"
- "\u9000"
- "\uFF39"
- "\u8349"
- "\u61F8"
- "\u6458"
- "\u58CA"
- "\u4F38"
- "\u85AC"
- "\u9996"
- "\u5BFF"
- "\u53B3"
- "\u606F"
- "\u5C45"
- "\u643A"
- "\u9F3B"
- "\u9280"
- "\u4EA1"
- "\u6CCA"
- "\u8857"
- "\u9759"
- "\u9CE5"
- "\u677E"
- "\u5F92"
- "\u969C"
- "\u7B4B"
- "\u7559"
- "\u51B7"
- "\u5C24"
- "\u68EE"
- "\u5438"
- "\u5012"
- "\u68B0"
- "\u6D0B"
- "\u821E"
- "\u6A4B"
- "\u500D"
- "\u6255"
- "\u5352"
- "\u7E04"
- "\u6C5A"
- "\u53F8"
- "\u6625"
- "\u793C"
- "\u66DC"
- "\u6545"
- "\u526F"
- "\u5F01"
- "\u5439"
- "\u85E4"
- "\u8DE1"
- "\u962A"
- "\u4E86"
- "\u91E3"
- "\u9632"
- "\u7834"
- "\u6012"
- "\u662F"
- "\u30A5"
- "\u7AF6"
- "\u8179"
- "\u4E95"
- "\u4E08"
- "\u64AE"
- "\u72ED"
- "\u5BD2"
- "\u7B46"
- "\u5965"
- "\u8C4A"
- "\u732E"
- "\u5C31"
- "\u5A18"
- "\u79D2"
- "\u6C5F"
- "\u8E0F"
- "\u8A13"
- "\u7372"
- "\u96E8"
- "\u6BBA"
- "\u57CB"
- "\u64CD"
- "\u9AA8"
- "\u8D85"
- "\u6D5C"
- "\u8B66"
- "\u7DD1"
- "\u7D61"
- "\u8133"
- "\u7B11"
- "\u6D6E"
- "\u7D66"
- "\u7126"
- "\u8A70"
- "\u878D"
- "\u738B"
- "\u5C3A"
- "\u5E7C"
- "\u820C"
- "\u663C"
- "\u88CF"
- "\u6CE3"
- "\u67C4"
- "\u9396"
- "\u62E1"
- "\u8A3A"
- "\u7DE0"
- "\u5B98"
- "\u6697"
- "\u820E"
- "\u6298"
- "\u5264"
- "\u4E73"
- "\u6B6F"
- "\u7248"
- "\u5C04"
- "\u8108"
- "\u9707"
- "\u7802"
- "\u4F34"
- "\u72AF"
- "\u4F50"
- "\u5DDE"
- "\u8FB2"
- "\u8DA3"
- "\u990A"
- "\u675F"
- "\u6E2F"
- "\u8FEB"
- "\u5F3E"
- "\u798F"
- "\u51AC"
- "\u541B"
- "\u6B66"
- "\u77AC"
- "\u67A0"
- "\u6CA2"
- "\u661F"
- "\u5BCC"
- "\u6557"
- "\u5D0E"
- "\u6355"
- "\u8377"
- "\u5F1F"
- "\u95BE"
- "\u7E54"
- "\u7C89"
- "\u725B"
- "\u8DF5"
- "\u9999"
- "\u6797"
- "\u83DC"
- "\u62CD"
- "\u63CF"
- "\u888B"
- "\u6607"
- "\u91DD"
- "\u8FCE"
- "\u585A"
- "\u5A46"
- "\uFF49"
- "\u8ECD"
- "\uFF13"
- "\uFF37"
- "\u5BC2"
- "\u8F29"
- "\u3074"
- "\u5DFB"
- "\u4E01"
- "\u504F"
- "\u79CB"
- "\u5E9C"
- "\u6CC9"
- "\u81F3"
- "\u6368"
- "\u7956"
- "\u8584"
- "\u5B97"
- "\u5FB9"
- "\u93E1"
- "\u75C7"
- "\u6CB9"
- "\u8131"
- "\u9CF4"
- "\u7AE5"
- "\u6BDB"
- "\u9077"
- "\u84CB"
- "\u58C1"
- "\u5915"
- "\u5589"
- "\u907F"
- "\u984D"
- "\u6EA2"
- "\u96F0"
- "\u4EE4"
- "\u59C9"
- "\u63E1"
- "\u3077"
- "\u523B"
- "\u62E0"
- "\u8CA1"
- "\u8FF7"
- "\u9063"
- "\u82B8"
- "\u5E8F"
- "\u76E3"
- "\u8457"
- "\u5869"
- "\u5009"
- "\u7F6A"
- "\u6F5C"
- "\u7D5E"
- "\u764C"
- "\u5BAE"
- "\u5E2D"
- "\u8F2A"
- "\u594F"
- "\u846C"
- "\u6C60"
- "\u6CBF"
- "\u5FAE"
- "\u5305"
- "\u76CA"
- "\u76AE"
- "\u4FC3"
- "\u6297"
- "\u5FEB"
- "\u66AB"
- "\u52E7"
- "\u8CA9"
- "\u8C46"
- "\u5B63"
- "\u529F"
- "\u9A12"
- "\uFF54"
- "\u97D3"
- "\u6ED1"
- "\u75B2"
- "\u9003"
- "\u9061"
- "\u5E79"
- "\u60A9"
- "\u83D3"
- "\u672D"
- "\u6804"
- "\u9177"
- "\u8B1D"
- "\u6C96"
- "\u96EA"
- "\u5360"
- "\u60D1"
- "\u63FA"
- "\u866B"
- "\u62B1"
- "\uFF4B"
- "\u5CA1"
- "\u6E9C"
- "\u8535"
- "\u7763"
- "\u6838"
- "\u4E71"
- "\u4E45"
- "\u9EC4"
- "\u9670"
- "\u7720"
- "\u7B26"
- "\u6B8A"
- "\u628A"
- "\u6291"
- "\u5E0C"
- "\u63C3"
- "\u6483"
- "\u5EAB"
- "\u5409"
- "\u6E6F"
- "\u65CB"
- "\u640D"
- "\u52AA"
- "\u64E6"
- "\u9769"
- "\u6E0B"
- "\u773C"
- "\u592E"
- "\u8CDE"
- "\u5374"
- "\u5948"
- "\u539A"
- "\u59D4"
- "\u83EF"
- "\u96A0"
- "\uFF4E"
- "\u30CC"
- "\u9BAE"
- "\u515A"
- "\u5C65"
- "\u8A98"
- "\u6469"
- "\u6162"
- "\u5442"
- "\u7206"
- "\u7BB1"
- "\u6075"
- "\u9678"
- "\u7DCA"
- "\u7E3E"
- "\u5742"
- "\u7B52"
- "\u7532"
- "\u5348"
- "\u5230"
- "\u8CAC"
- "\u5C0A"
- "\u6CF3"
- "\u6279"
- "\u7518"
- "\u5B6B"
- "\u7159"
- "\u8A2A"
- "\u50B7"
- "\u6E05"
- "\u716E"
- "\u88C1"
- "\u9694"
- "\u8ED2"
- "\uFF31"
- "\u7FBD"
- "\u5D29"
- "\u7A74"
- "\u7CD6"
- "\u707D"
- "\u5275"
- "\u6F70"
- "\u6691"
- "\u87BA"
- "\u653B"
- "\u6577"
- "\u6575"
- "\u76E4"
- "\u9732"
- "\u7A93"
- "\u63B2"
- "\u81E8"
- "\u53E9"
- "\u5145"
- "\u4FFA"
- "\u8F38"
- "\u967D"
- "\u6B27"
- "\u6687"
- "\u6B6A"
- "\u6DFB"
- "\u60A3"
- "\u5FD9"
- "\u70AD"
- "\u829D"
- "\u8EDF"
- "\u88D5"
- "\u7E01"
- "\u6F2B"
- "\u7A1A"
- "\u7968"
- "\u8A69"
- "\u5CB8"
- "\u7687"
- "\uFF4A"
- "\u6627"
- "\u5100"
- "\u5857"
- "\u8E0A"
- "\u8AF8"
- "\u6D74"
- "\u904D"
- "\u66D6"
- "\u5BE7"
- "\u99B4"
- "\u5339"
- "\u03B1"
- "\u627F"
- "\u30BE"
- "\u6383"
- "\u5375"
- "\u5999"
- "\u3043"
- "\u66B4"
- "\u62B5"
- "\u604B"
- "\u8863"
- "\u6EB6"
- "\u7DAD"
- "\u514D"
- "\u6392"
- "\u685C"
- "\u7573"
- "\u7B87"
- "\u6398"
- "\u535A"
- "\u6FC3"
- "\u7FCC"
- "\u8056"
- "\u7DB2"
- "\u885B"
- "\u64EC"
- "\u5E8A"
- "\u9178"
- "\u6669"
- "\u4E7E"
- "\u90AA"
- "\u7551"
- "\u6EDE"
- "\u5802"
- "\u7E41"
- "\u4ECF"
- "\u5FB3"
- "\u7DE9"
- "\u6A39"
- "\u6551"
- "\u633F"
- "\u68D2"
- "\u906D"
- "\u676F"
- "\u6065"
- "\u6E56"
- "\u6E09"
- "\u81D3"
- "\u8CB4"
- "\u723A"
- "\u7981"
- "\u4F75"
- "\u5263"
- "\u786C"
- "\u58C7"
- "\u80A9"
- "\u6D78"
- "\u4F0A"
- "\u5B9D"
- "\u6094"
- "\u8E8D"
- "\u6DB2"
- "\u99C6"
- "\u6D25"
- "\u307A"
- "\u6D45"
- "\u8B72"
- "\u5CA9"
- "\u9B45"
- "\u587E"
- "\u03B8"
- "\u6696"
- "\u6CB3"
- "\u8A95"
- "\u7F36"
- "\u5507"
- "\u80A2"
- "\u6328"
- "\u62F6"
- "\u7A0E"
- "\u50AC"
- "\u8A34"
- "\uFF58"
- "\u968A"
- "\u659C"
- "\u770B"
- "\uFF50"
- "\u6D66"
- "\u8352"
- "\uFF41"
- "\u71C3"
- "\u52A3"
- "\u5BA3"
- "\u8FBF"
- "\u790E"
- "\u62FE"
- "\u5C4A"
- "\u6905"
- "\u5EC3"
- "\u6749"
- "\u9AEA"
- "\u77E2"
- "\u67D4"
- "\u55AB"
- "\u73CD"
- "\u57FC"
- "\u88C2"
- "\u63B4"
- "\u59BB"
- "\u8CA7"
- "\u934B"
- "\u59A5"
- "\u59B9"
- "\u5175"
- "\uFF14"
- "\u623F"
- "\u5951"
- "\u65E8"
- "\uFF44"
- "\u0394"
- "\u5DE1"
- "\u8A02"
- "\u5F90"
- "\u8CC0"
- "\u7BED"
- "\u9810"
- "\u84C4"
- "\u8846"
- "\u5DE8"
- "\u5506"
- "\u65E6"
- "\u5531"
- "\u9047"
- "\u6E67"
- "\u8010"
- "\u96C4"
- "\u6D99"
- "\u8CB8"
- "\u822A"
- "\u5104"
- "\u5618"
- "\u6C37"
- "\u78C1"
- "\u679D"
- "\u8CAB"
- "\u61D0"
- "\u52DF"
- "\u8155"
- "\u65E7"
- "\u7AF9"
- "\u99D0"
- "\u8A72"
- "\uFF52"
- "\u5893"
- "\u518A"
- "\u80F8"
- "\u758E"
- "\u773A"
- "\uFF45"
- "\u9855"
- "\u631F"
- "\u55A7"
- "\u520A"
- "\u68C4"
- "\u990C"
- "\u67F1"
- "\u5800"
- "\u8ACB"
- "\u79D8"
- "\u6717"
- "\u96F2"
- "\u8170"
- "\u7A32"
- "\u828B"
- "\u8C9D"
- "\u5C48"
- "\u91CC"
- "\u508D"
- "\u8102"
- "\u6FC1"
- "\u54B2"
- "\u6BD2"
- "\u6EC5"
- "\u5629"
- "\u6442"
- "\u6E7E"
- "\u83CC"
- "\u8150"
- "\u5211"
- "\u5F25"
- "\u5AC1"
- "\u61A7"
- "\u4E18"
- "\u5C90"
- "\u52B1"
- "\u8CA2"
- "\u6C41"
- "\u96C7"
- "\u5076"
- "\u9774"
- "\u72D9"
- "\u719F"
- "\u900F"
- "\uFF59"
- "\u8CFC"
- "\u5319"
- "\uFF46"
- "\uFF15"
- "\u92AD"
- "\u6D12"
- "\u8A17"
- "\u809D"
- "\u963F"
- "\u80C3"
- "\uFF53"
- "\u885D"
- "\u621A"
- "\uFF4D"
- "\u84B8"
- "\u4FF3"
- "\u8972"
- "\u5265"
- "\u5BE9"
- "\u6817"
- "\u8A87"
- "\u5237"
- "\u7CF8"
- "\u90F7"
- "\u5049"
- "\u6C57"
- "\u53CC"
- "\u98FD"
- "\u77DB"
- "\u984E"
- "\u552F"
- "\u6590"
- "\u7DB4"
- "\u5B64"
- "\u90F5"
- "\u76D7"
- "\u9E7F"
- "\u8CC3"
- "\u76FE"
- "\u682A"
- "\u9ED9"
- "\u7C8B"
- "\u63DA"
- "\u9808"
- "\u7092"
- "\u9285"
- "\u5E81"
- "\u9B54"
- "\u75E9"
- "\u9802"
- "\u76BF"
- "\u970A"
- "\u5E55"
- "\u570F"
- "\u574A"
- "\u72C2"
- "\u8912"
- "\u9451"
- "\u50B5"
- "\u77AD"
- "\u565B"
- "\u5E33"
- "\u5782"
- "\u8870"
- "\u4ED9"
- "\u9EA6"
- "\u8CA8"
- "\u7AAA"
- "\u6F6E"
- "\u6FEF"
- "\u5238"
- "\u7D1B"
- "\u7384"
- "\u7C4D"
- "\uFF43"
- "\u74F6"
- "\u5DE3"
- "\u5192"
- "\u6CBC"
- "\u99D2"
- "\u5C3D"
- "\u517C"
- "\u7C97"
- "\u63BB"
- "\u80BA"
- "\u9154"
- "\uFF4C"
- "\u702C"
- "\u505C"
- "\u6F20"
- "\u673A"
- "\u916C"
- "\u4FD7"
- "\u8986"
- "\u5C3B"
- "\u9375"
- "\u5805"
- "\u6F2C"
- "\u2212"
- "\u79C0"
- "\u6885"
- "\u9042"
- "\u57F9"
- "\u871C"
- "\uFF42"
- "\u30FB"
- "\u52C7"
- "\u8ECC"
- "\u7F85"
- "\uFF3A"
- "\u5BB4"
- "\u8C5A"
- "\u7A3C"
- "\u62AB"
- "\u8CAF"
- "\u9EBB"
- "\u6C4E"
- "\u51DD"
- "\u5FE0"
- "\uFF55"
- "\u5F80"
- "\u8AE6"
- "\u8B19"
- "\u6F0F"
- "\u5410"
- "\u3047"
- "\u7652"
- "\u9663"
- "\u6D6A"
- "\u52D8"
- "\u53D9"
- "\u5200"
- "\u67B6"
- "\u57F7"
- "\u5674"
- "\u5197"
- "\u4E4F"
- "\u837B"
- "\u81ED"
- "\u708A"
- "\u598A"
- "\u808C"
- "\u8CDB"
- "\u5C0B"
- "\u9175"
- "\u757F"
- "\u5270"
- "\u706F"
- "\u8C6A"
- "\u9685"
- "\u9905"
- "\u7949"
- "\u80AF"
- "\u62DB"
- "\u7A3D"
- "\u5F6B"
- "\u5F69"
- "\u03B2"
- "\u6B04"
- "\u718A"
- "\u68CB"
- "\u6CB8"
- "\u6C88"
- "\u8339"
- "\u7ABA"
- "\u5B9C"
- "\u8217"
- "\u7CA7"
- "\u683D"
- "\u80AA"
- "\u9665"
- "\u6CE1"
- "\u95D8"
- "\u8F3F"
- "\u5353"
- "\u7070"
- "\u8F9B"
- "\u6F01"
- "\u9F13"
- "\u585E"
- "\u8CD1"
- "\u76C6"
- "\u68FA"
- "\u6311"
- "\u54F2"
- "\u9867"
- "\u8B21"
- "\u8302"
- "\u90A3"
- "\u80DE"
- "\u4F3A"
- "\u5A92"
- "\u708E"
- "\u67D0"
- "\u564C"
- "\u5203"
- "\u6F5F"
- "\u7656"
- "\u4E80"
- "\u63EE"
- "\u511F"
- "\u4E39"
- "\u7DEF"
- "\u9DB4"
- "\u4E4B"
- "\u6BB4"
- "\u4EF0"
- "\u5949"
- "\u7E2B"
- "\u75F4"
- "\u8650"
- "\u61B2"
- "\u71E5"
- "\u6DC0"
- "\uFF57"
- "\u88F8"
- "\u82BD"
- "\u63A7"
- "\u95A3"
- "\u7587"
- "\u925B"
- "\u8178"
- "\u5642"
- "\u935B"
- "\u654F"
- "\u9162"
- "\u938C"
- "\u81E3"
- "\u8E74"
- "\u5A01"
- "\u6D44"
- "\u7965"
- "\u795D"
- "\u86C7"
- "\u811A"
- "\u4F0F"
- "\u6F54"
- "\u5510"
- "\u6955"
- "\u57A3"
- "\u932F"
- "\u514B"
- "\u614C"
- "\u6BBF"
- "\u819C"
- "\u61A9"
- "\u9065"
- "\u82DB"
- "\u9676"
- "\u8997"
- "\u78E8"
- "\u624D"
- "\u5E1D"
- "\u642C"
- "\u722A"
- "\u90CA"
- "\u80A5"
- "\u819D"
- "\u62D2"
- "\u868A"
- "\u5208"
- "\u5132"
- "\uFF48"
- "\u596E"
- "\u7761"
- "\u5BEE"
- "\uFF17"
- "\u4FB5"
- "\u9B31"
- "\u635C"
- "\u6DBC"
- "\u5A20"
- "\u7363"
- "\u7C92"
- "\u963B"
- "\u6CE5"
- "\u7ADC"
- "\u91A4"
- "\u92ED"
- "\u6606"
- "\u9234"
- "\u7DBF"
- "\u830E"
- "\u8107"
- "\u7948"
- "\u8A60"
- "\u6B53"
- "\u7F70"
- "\u68DA"
- "\u83CA"
- "\u6069"
- "\u7267"
- "\u540A"
- "\u8DF3"
- "\u6DE1"
- "\u7F72"
- "\u596A"
- "\u9038"
- "\u6170"
- "\u5EB6"
- "\u9262"
- "\u8B5C"
- "\u5ECA"
- "\u5606"
- "\u62ED"
- "\u8CED"
- "\u99C1"
- "\u7F8A"
- "\u5384"
- "\u7D10"
- "\u9673"
- "\u816B"
- "\u6841"
- "\u9298"
- "\u96CC"
- "\u636E"
- "\u62DD"
- "\u60E8"
- "\u96DB"
- "\u845B"
- "\u7FA8"
- "\u609F"
- "\u76DF"
- "\u7E4A"
- "\u9192"
- "\u65EC"
- "\u6DAF"
- "\u8CC4"
- "\u6E7F"
- "\u6F02"
- "\u7D2B"
- "\u30F4"
- "\u4E9C"
- "\u8AA0"
- "\u5854"
- "\u5E4C"
- "\u80C6"
- "\u64A5"
- "\u865A"
- "\u6F64"
- "\u9699"
- "\u5F84"
- "\u6C72"
- "\u8CE2"
- "\u5BF8"
- "\u8888"
- "\u88DF"
- "\u8266"
- "\uFF19"
- "\u62D8"
- "\uFF47"
- "\u5841"
- "\u5BDB"
- "\u51A0"
- "\u614E"
- "\u971E"
- "\u731B"
- "\u67CF"
- "\u733F"
- "\u9084"
- "\u50E7"
- "\u53EB"
- "\u53F1"
- "\u72E9"
- "\u63C9"
- "\u7D2F"
- "\u5982"
- "\u7897"
- "\u6BBB"
- "\u906E"
- "\u5FCD"
- "\u6EF4"
- "\u6B96"
- "\u8D08"
- "\u74A7"
- "\u6F38"
- "\u6589"
- "\u03BC"
- "\u9686"
- "\u6176"
- "\u72A0"
- "\u7272"
- "\u5146"
- "\u576A"
- "\u6284"
- "\u65D7"
- "\u50DA"
- "\u5C3F"
- "\u51CD"
- "\u902E"
- "\u7B39"
- "\u8F1D"
- "\u5C1A"
- "\u8015"
- "\u51CC"
- "\u632B"
- "\u4F10"
- "\u7BB8"
- "\u4E91"
- "\u5968"
- "\u819A"
- "\u9010"
- "\u03B3"
- "\u5F26"
- "\u9700"
- "\u5C01"
- "\u5E3D"
- "\u6F31"
- "\u9283"
- "\u507D"
- "\u5875"
- "\u7E1B"
- "\u58A8"
- "\u6020"
- "\u96F7"
- "\u5766"
- "\u68A8"
- "\u90ED"
- "\u7A4F"
- "\u67FF"
- "\u7AFF"
- "\u5E61"
- "\u5F81"
- "\u99B3"
- "\u9EBA"
- "\u03C4"
- "\u8154"
- "\u7C98"
- "\u7409"
- "\u731F"
- "\u4EC1"
- "\u8358"
- "\u6492"
- "\u7C3F"
- "\u90E1"
- "\u7B4C"
- "\u5D8B"
- "\u6FE1"
- "\u618E"
- "\u5446"
- "\u6F15"
- "\u5A29"
- "\u68DF"
- "\u6052"
- "\uFF18"
- "\u5553"
- "\u5B5D"
- "\u67F3"
- "\u64A4"
- "\u85CD"
- "\u95C7"
- "\u5B22"
- "\u67F4"
- "\u6734"
- "\u6D1E"
- "\u5CB3"
- "\u9B3C"
- "\u8DE8"
- "\u3049"
- "\u70C8"
- "\u559A"
- "\u6F84"
- "\u6FEB"
- "\u82A6"
- "\u62D3"
- "\u51FD"
- "\u6843"
- "\u76F2"
- "\u6CA1"
- "\u7A6B"
- "\u6212"
- "\u99FF"
- "\u8D05"
- "\u67AF"
- "\u6C70"
- "\u53F6"
- "\u90A6"
- "\u66C7"
- "\u9A30"
- "\u711A"
- "\u51F6"
- "\u5CF0"
- "\u69FD"
- "\u67DA"
- "\u5320"
- "\u9A19"
- "\u502B"
- "\u84EE"
- "\u634C"
- "\u61F2"
- "\u8B0E"
- "\u91B8"
- "\u56DA"
- "\u7344"
- "\u6EDD"
- "\u6795"
- "\u60DC"
- "\u7DB1"
- "\u8B33"
- "\u7089"
- "\u5DFE"
- "\u91DC"
- "\u9BAB"
- "\u6E58"
- "\u92F3"
- "\u5351"
- "\uFF51"
- "\u7DBB"
- "\u5EF7"
- "\u85A6"
- "\u667A"
- "\u6C99"
- "\u8CBF"
- "\u8098"
- "\uFF16"
- "\u5F0A"
- "\u66F0"
- "\u7881"
- "\u9DFA"
- "\u6676"
- "\u8D74"
- "\u8513"
- "\u75D2"
- "\u79E9"
- "\u5DE7"
- "\u9418"
- "\u7B1B"
- "\u638C"
- "\u53EC"
- "\u5347"
- "\u6249"
- "\u5A2F"
- "\u8A1F"
- "\u8247"
- "\u64B2"
- "\uFF56"
- "\u6182"
- "\u90B8"
- "\u5098"
- "\u7CDE"
- "\u03BB"
- "\u5C16"
- "\u723D"
- "\u7832"
- "\u55A9"
- "\u80CE"
- "\u84B2"
- "\u9DF9"
- "\u755C"
- "\u6897"
- "\uFF4F"
- "\u5023"
- "\u6247"
- "\u7DFB"
- "\u6756"
- "\u622F"
- "\u5D50"
- "\u6A3D"
- "\u6F06"
- "\u9CE9"
- "\u039B"
- "\u5FAA"
- "\u8896"
- "\u9784"
- "\u6851"
- "\u5D16"
- "\u59A8"
- "\u66A6"
- "\u59D3"
- "\u7A00"
- "\u3041"
- "\u920D"
- "\u9727"
- "\u9837"
- "\u8105"
- "\u7B20"
- "\u86CD"
- "\u8328"
- "\u69CD"
- "\u3062"
- "\u59EB"
- "\u6ABB"
- "\u8463"
- "\u6C7D"
- "\u541F"
- "\u807E"
- "\u73E0"
- "\u62B9"
- "\u9D28"
- "\u64AB"
- "\u8607"
- "\u7AC3"
- "\u864E"
- "\u78EF"
- "\u77E9"
- "\u7CCA"
- "\u55AA"
- "\u8A6E"
- "\u82D1"
- "\u98F4"
- "\u6089"
- "\u674F"
- "\u9B42"
- "\u914C"
- "\u9BC9"
- "\u8A50"
- "\u03A3"
- "\u7815"
- "\u55DC"
- "\u7FFC"
- "\u4F0E"
- "\u751A"
- "\u5F66"
- "\u961C"
- "\u8706"
- "\u6109"
- "\u80F4"
- "\u8776"
- "\u8B00"
- "\u9271"
- "\u75E2"
- "\u73ED"
- "\u9438"
- "\u92F8"
- "\u62D9"
- "\u6068"
- "\u4EAD"
- "\u4EAB"
- "\u75AB"
- "\u5F13"
- "\u74E6"
- "\u7D46"
- "\u814E"
- "\u62F3"
- "\u9A0E"
- "\u58B3"
- "\u83F1"
- "\u6813"
- "\u5256"
- "\u6D2A"
- "\u5484"
- "\u9591"
- "\u58EE"
- "\u9945"
- "\u65ED"
- "\u8987"
- "\u80A1"
- "\u86D9"
- "\u724C"
- "\u965B"
- "\u714E"
- "\u63AC"
- "\u9AED"
- "\u9019"
- "\u5E7B"
- "\u54B3"
- "\u6E26"
- "\u55C5"
- "\u7A42"
- "\u7434"
- "\u5FCC"
- "\u70CF"
- "\u5448"
- "\u91D8"
- "\u611A"
- "\u6C3E"
- "\u8AFE"
- "\u6E9D"
- "\u7336"
- "\u7AAF"
- "\u8ACF"
- "\u8CC2"
- "\u57C3"
- "\u51F8"
- "\u7D0B"
- "\u6ADB"
- "\u525B"
- "\u98E2"
- "\u4FCA"
- "\u54C0"
- "\u5BB0"
- "\u93AE"
- "\u7435"
- "\u7436"
- "\u96C5"
- "\u8494"
- "\u85AA"
- "\u8A93"
- "\u59EA"
- "\u62D7"
- "\u8778"
- "\u7169"
- "\u7B51"
- "\u690E"
- "\u4FB6"
- "\u553E"
- "\u7BAA"
- "\u5075"
- "\u8861"
- "\u03C3"
- "\u88FE"
- "\u95B2"
- "\u805A"
- "\u4E3C"
- "\u633D"
- "\u7E4D"
- "\u82D7"
- "\u9E93"
- "\u03C6"
- "\u03B4"
- "\u4E32"
- "\u51E1"
- "\u5F18"
- "\u85FB"
- "\u61C7"
- "\u817F"
- "\u7A9F"
- "\u6803"
- "\u6652"
- "\u5E84"
- "\u7891"
- "\u7B4F"
- "\u7B25"
- "\u5E06"
- "\u96B7"
- "\u8FB0"
- "\u75BE"
- "\u8FE6"
- "\u8A6B"
- "\u5617"
- "\u582A"
- "\u6842"
- "\u5B9B"
- "\u58F7"
- "\u8AED"
- "\u97AD"
- "\u9310"
- "\u6DF5"
- "\u79E4"
- "\u7525"
- "\u4F8D"
- "\u66FD"
- "\u6572"
- "\u63AA"
- "\u6168"
- "\u83E9"
- "\u5CE0"
- "\u901D"
- "\u5F70"
- "\u67F5"
- "\u82AF"
- "\u7C50"
- "\u57A2"
- "\u03BE"
- "\u77EF"
- "\u8C8C"
- "\u8F44"
- "\u8A89"
- "\u9813"
- "\u7D79"
- "\u9E78"
- "\u5E7D"
- "\u6881"
- "\u642D"
- "\u54BD"
- "\u82B3"
- "\u7729"
- "\u0393"
- "\u61A4"
- "\u7985"
- "\u6063"
- "\u5840"
- "\u7149"
- "\u75FA"
- "\uFF06"
- "\u7A40"
- "\u545F"
- "\u918D"
- "\u9190"
- "\u7901"
- "\u51F9"
- "\u86EE"
- "\u5974"
- "\u64AD"
- "\u7E79"
- "\u8499"
- "\u8A63"
- "\u4E5F"
- "\u5420"
- "\u4E59"
- "\u8E8A"
- "\u8E87"
- "\u9D2C"
- "\u7A92"
- "\u59E5"
- "\u9326"
- "\u694A"
- "\u8017"
- "\u6F09"
- "\u60E7"
- "\u4FE3"
- "\u6876"
- "\u5CFB"
- "\u905C"
- "\u65FA"
- "\u75D5"
- "\u03A6"
- "\u6234"
- "\u658E"
- "\u8CD3"
- "\u7BC7"
- "\u8429"
- "\u85E9"
- "\u7950"
- "\u8B83"
- "\u83AB"
- "\u9C39"
- "\u85A9"
- "\u5378"
- "\u4E9B"
- "\u75B9"
- "\u8E44"
- "\u4E56"
- "\uFF5A"
- "\u92FC"
- "\u6A3A"
- "\u5B8F"
- "\u7BE4"
- "\u8258"
- "\u81B3"
- "\u7A83"
- "\u7E82"
- "\u5598"
- "\u786B"
- "\u99D5"
- "\u7261"
- "\u732A"
- "\u62D0"
- "\u60DA"
- "\u60A0"
- "\u7CE7"
- "\u95A5"
- "\u03C0"
- "\u853D"
- "\u6850"
- "\u981A"
- "\u9214"
- "\u697C"
- "\u8C9E"
- "\u602F"
- "\u817A"
- "\u8305"
- "\u6CF0"
- "\u9913"
- "\u5C51"
- "\u9BDB"
- "\u929B"
- "\u9AB8"
- "\u9C57"
- "\u5824"
- "\u9675"
- "\u6DD8"
- "\u64C1"
- "\u81FC"
- "\u6D32"
- "\u8FBB"
- "\u8A23"
- "\u5C4F"
- "\u9BE8"
- "\u895F"
- "\u5CE1"
- "\u660C"
- "\u982C"
- "\u5806"
- "\u865C"
- "\u840E"
- "\u9EB9"
- "\u7CE0"
- "\u68B1"
- "\u8AFA"
- "\u5403"
- "\u66A2"
- "\u5B54"
- "\u5EB8"
- "\u5DF3"
- "\u589C"
- "\u85AE"
- "\u6101"
- "\u664B"
- "\u8236"
- "\u8FC5"
- "\u6B3A"
- "\u9640"
- "\u7709"
- "\u6CC4"
- "\u59FB"
- "\u9688"
- "\u58CC"
- "\u69D9"
- "\u5E87"
- "\u52D2"
- "\u6E07"
- "\u91E7"
- "\u4E43"
- "\u82D4"
- "\u9306"
- "\u58D5"
- "\u78D0"
- "\u6962"
- "\u65A7"
- "\u5E63"
- "\u03B7"
- "\u7E55"
- "\u83C5"
- "\u7109"
- "\u5112"
- "\u5D07"
- "\u8276"
- "\u5449"
- "\u7984"
- "\u54C9"
- "\u68AF"
- "\u5937"
- "\u546A"
- "\u56C3"
- "\u84BC"
- "\u9A28"
- "\u9D3B"
- "\u862D"
- "\u7CA5"
- "\u7D3A"
- "\u7D17"
- "\u7164"
- "\u03C9"
- "\u52FE"
- "\u97A0"
- "\u4F3D"
- "\u7AAE"
- "\u6E15"
- "\u0392"
- "\u8D66"
- "\u6597"
- "\u66F9"
- "\u8CE0"
- "\u5CAC"
- "\u847A"
- "\u7D33"
- "\u5B8D"
- "\u6191"
- "\u6357"
- "\u7C9B"
- "\u8CCA"
- "\u9F8D"
- "\u81C6"
- "\u6C8C"
- "\u52C5"
- "\u8096"
- "\u559D"
- "\u8CAA"
- "\u82AD"
- "\u8549"
- "\u919C"
- "\u64B9"
- "\u5740"
- "\u7BE0"
- "\u7D2C"
- "\u75B1"
- "\u52F2"
- "\u86FE"
- "\u88B4"
- "\u8749"
- "\u685F"
- "\u4FF5"
- "\u818F"
- "\u5DF7"
- "\u5072"
- "\u6148"
- "\u754F"
- "\u96BB"
- "\u606D"
- "\u64B0"
- "\u9D0E"
- "\u52AB"
- "\u63C6"
- "\u914E"
- "\u8106"
- "\u6241"
- "\u9761"
- "\u8511"
- "\u95CA"
- "\u96BC"
- "\u6CCC"
- "\u5996"
- "\u65A1"
- "\u52C3"
- "\u637B"
- "\u6E13"
- "\u937E"
- "\u5954"
- "\u6155"
- "\u5984"
- "\u6A0B"
- "\u936C"
- "\u502D"
- "\u8679"
- "\u03BD"
- "\u60A6"
- "\u8151"
- "\u62EE"
- "\u51E0"
- "\u80E1"
- "\u8FC2"
- "\u8EAF"
- "\u50ED"
- "\u6ECB"
- "\u7B8B"
- "\u75F0"
- "\u65AC"
- "\u85AB"
- "\u673D"
- "\u82A5"
- "\u9756"
- "\u907C"
- "\u6591"
- "\u7953"
- "\u5B95"
- "\u976D"
- "\u72D7"
- "\u81BF"
- "\u59AC"
- "\u5A7F"
- "\u7554"
- "\u7AEA"
- "\u9D5C"
- "\u8CE6"
- "\u7E1E"
- "\u6731"
- "\u7C95"
- "\u69FB"
- "\u6D69"
- "\u511A"
- "\u8CDC"
- "\u8B39"
- "\u68B5"
- "\u5A9B"
- "\u7947"
- "\u5516"
- "\u03C8"
- "\u03C1"
- "\u5A9A"
- "\u540E"
- "\u6FB1"
- "\u7DBE"
- "\u6372"
- "\u67E9"
- "\u6DF3"
- "\u74DC"
- "\u5631"
- "\u51B4"
- "\u6115"
- "\u9211"
- "\u51B6"
- "\u67A2"
- "\u03A9"
- "\u77B0"
- "\u6775"
- "\u5EB5"
- "\u4F2F"
- "\u840C"
- "\u5609"
- "\u4FC4"
- "\u7D06"
- "\u81A0"
- "\u7252"
- "\u8EB0"
- "\u543E"
- "\u50FB"
- "\u704C"
- "\u646F"
- "\u5091"
- "\u929A"
- "\u8B90"
- "\u8910"
- "\u8FB1"
- "\u7345"
- "\u7B94"
- "\u73A9"
- "\u4F43"
- "\u583A"
- "\u5504"
- "\u515C"
- "\u62CC"
- "\u5751"
- "\u75D8"
- "\u69CC"
- "\u77B3"
- "\u79BF"
- "\u66D9"
- "\u5DF2"
- "\u7FC1"
- "\u5C3C"
- "\u60BC"
- "\u7F77"
- "\u699C"
- "\u5451"
- "\u79E6"
- "\u533F"
- "\u03BA"
- "\u7259"
- "\u4F46"
- "\u572D"
- "\u548E"
- "\u745E"
- "\u7A1C"
- "\u785D"
- "\u6BC5"
- "\u7015"
- "\u8702"
- "\u978D"
- "\u6A2B"
- "\u7566"
- "\u660F"
- "\u755D"
- "\u4FAE"
- "\u548B"
- "\u6367"
- "\u7F9E"
- "\u803D"
- "\u60B8"
- "\u51E7"
- "\u4EAE"
- "\u9AC4"
- "\u54FA"
- "\u4FEF"
- "\u567A"
- "\u8058"
- "\u8654"
- "\u5B8B"
- "\u93A7"
- "\u968B"
- "\u51B3"
- "\u59D1"
- "\u7078"
- "\u927E"
- "\u8F5F"
- "\u60F0"
- "\u03C7"
- "\u643E"
- "\u6854"
- "\u7F6B"
- "\u8E4A"
- "\u68B6"
- "\u6893"
- "\u7F75"
- "\u65A5"
- "\u6276"
- "\u6147"
- "\u61C3"
- "\u9949"
- "\u6E25"
- "\u6AD3"
- "\u80E4"
- "\u56A2"
- "\u9CF3"
- "\u6A84"
- "\u8C79"
- "\u50B2"
- "\u50D1"
- "\u7586"
- "\u6134"
- "\u53A8"
- "\u6FB9"
- "\u9320"
- "\u64E2"
- "\u6EBA"
- "\u7624"
- "\u73CA"
- "\u5BC5"
- "\u6977"
- "\u9583"
- "\u9CF6"
- "\u7119"
- "\u6912"
- "\u9B4F"
- "\u9798"
- "\u68A2"
- "\u6900"
- "\u8ACC"
- "\u696B"
- "\u5F14"
- "\u65D2"
- "\u5957"
- "\u9F5F"
- "\u9F6C"
- "\u7D18"
- "\u810A"
- "\u536F"
- "\u727D"
- "\u6BD8"
- "\u6714"
- "\u514E"
- "\u721B"
- "\u6D9C"
- "\u5851"
- "\u5F04"
- "\u676D"
- "\u63A0"
- "\u80B4"
- "\u626E"
- "\u51F1"
- "\u798D"
- "\u8036"
- "\u808B"
- "\u7235"
- "\u61AB"
- "\u57D3"
- "\u5983"
- "\u9910"
- "\u7C7E"
- "\u7262"
- "\u6816"
- "\u9017"
- "\u7058"
- "\u5E5F"
- "\u68F2"
- "\u5687"
- "\u7827"
- "\u6E1A"
- "\u7C9F"
- "\u7A7F"
- "\u7F60"
- "\u68F9"
- "\u8594"
- "\u8587"
- "\u526A"
- "\u7B48"
- "\u936E"
- "\u892A"
- "\u7AA9"
- "\u58F1"
- "\u30F2"
- "\u7460"
- "\u7483"
- "\u61BE"
- "\u5E16"
- "\u6960"
- "\u03B5"
- "\u5480"
- "\u56BC"
- "\u56A5"
- "\u6D29"
- "\u6A58"
- "\u6867"
- "\u6A9C"
- "\u63F6"
- "\u63C4"
- "\u88E1"
- "\u6A80"
- "\u900D"
- "\u9081"
- "\u6028"
- "\u73B2"
- "\u90C1"
- "\u5815"
- "\u8AB9"
- "\u8B17"
- "\u8956"
- "\u51F0"
- "\u9B41"
- "\u5B75"
- "\u7766"
- "\u71FB"
- "\u5243"
- "\u53A9"
- "\u71D7"
- "\u84D1"
- "\u5EFB"
- "\u75D4"
- "\u837C"
- "\u6190"
- "\u6070"
- "\u8F9F"
- "\u5F98"
- "\u5F8A"
- "\u4FA0"
- "\u5830"
- "\u971C"
- "\u809B"
- "\u76E7"
- "\u5835"
- "\u72DB"
- "\u9D8F"
- "\u9119"
- "\u4F73"
- "\u916A"
- "\u8AE7"
- "\u6973"
- "\u7826"
- "\u5AC9"
- "\u5DEB"
- "\u53E1"
- "\u9716"
- "\u6E23"
- "\u5544"
- "\u798E"
- "\u6CAB"
- "\u821F"
- "\u6C5D"
- "\u5302"
- "\u99F1"
- "\u6C08"
- "\u308E"
- "\u714C"
- "\u7DAC"
- "\u5F1B"
- "\u586B"
- "\u84C1"
- "\u5039"
- "\u7CFE"
- "\u51A5"
- "\u674E"
- "\u966A"
- "\u8877"
- "\u59E6"
- "\u5962"
- "\u75BC"
- "\u8A54"
- "\u8599"
- "\u8B5A"
- "\u5CEF"
- "\u684E"
- "\u688F"
- "\u9B92"
- "\u8A1B"
- "\u55B0"
- "\u7960"
- "\u67A1"
- "\u6681"
- "\u4E5E"
- "\u91C7"
- "\u9739"
- "\u9742"
- "\u687F"
- "\u929C"
- "\u4F51"
- "\u79BE"
- "\u5944"
- "\u6930"
- "\u87F9"
- "\u8061"
- "\u98AF"
- "\u30C2"
- "\u8E81"
- "\u8E42"
- "\u8E99"
- "\u8695"
- "\u693F"
- "\u62F7"
- "\u9257"
- "\u8882"
- "\u78CB"
- "\u7422"
- "\u6B3D"
- "\u60B6"
- "\u53C9"
- "\u7E37"
- "\u8A36"
- "\u50C5"
- "\u5C6F"
- "\u5EEC"
- "\u5C41"
- "\u99A8"
- "\u6E20"
- "\u8568"
- "\u699B"
- "\u675C"
- "\u7791"
- "\u6A8E"
- "\u8ECB"
- "\u8F62"
- "\u8700"
- "\u8235"
- "\u82B9"
- "\u6B3E"
- "\u639F"
- "\u8E2A"
- "\u745A"
- "\u71E6"
- "\u7D21"
- "\u584A"
- "\u8171"
- "\u6753"
- "\u65A4"
- "\u786F"
- "\u55AC"
- "\u8B04"
- "\u79DF"
- "\u8180"
- "\u80F1"
- "\u6EC4"
- "\u9C10"
- "\u8475"
- "\u8471"
- "\u8461"
- "\u5A49"
- "\u88D4"
- "\u9F0E"
- "\u9187"
- "\u67EF"
- "\u991E"
- "\u96C1"
- "\u8AA6"
- "\u8A62"
- "\u633A"
- "\u7AFA"
- "\u8A82"
- "\u5191"
- "\u8718"
- "\u86DB"
- "\u70B8"
- "\u932B"
- "\u58C5"
- "\u8087"
- "\u54AC"
- "\u9B8E"
- "\u67D1"
- "\u7D9C"
- "\u5BE1"
- "\u7977"
- "\u522E"
- "\u8CCE"
- "\u9B18"
- "\u884D"
- "\u5FD6"
- "\u685D"
- "\u0398"
- "\u039A"
- "\u03A8"
- "\u53E2"
- "\u4FCE"
- "\u7396"
- "\u78A7"
- "\u8766"
- "\u8521"
- "\u649A"
- "\u7A14"
- "\u752B"
- "\u6D35"
- "\u7893"
- "\u9ECE"
- "\u5AE1"
- "\u8755"
- "\u725F"
- "\u6B89"
- "\u6C83"
- "\u7B50"
- "\u619A"
- "\u6E24"
- "\u9B4D"
- "\u9B4E"
- "\u71ED"
- "\u7940"
- "\u6D1B"
- "\u88F3"
- "\u4E11"
- "\u9846"
- "\u9952"
- "\u5EC9"
- "\u689F"
- "\u848B"
- "\u6DD1"
- "\u8737"
- "\u9644"
- "\u695A"
- "\u9F20"
- "\u5154"
- "\u61AC"
- "\u5F57"
- "\u66FC"
- "\u5D11"
- "\u57DC"
- "\u5F77"
- "\u5F7F"
- "\u5DF4"
- "\u831C"
- "\u6D9B"
- "\u57E0"
- "\u945A"
- "\u92D2"
- "\u5C09"
- "\u53AD"
- "\u7B75"
- "\u7AE3"
- "\u7E8F"
- "\u6194"
- "\u60B4"
- "\u8E5F"
- "\u675E"
- "\u7825"
- "\u8F14"
- "\u9C52"
- "\u4FAF"
- "\u7D62"
- "\u5475"
- "\u698E"
- "\u53EA"
- "\u71D5"
- "\u5C60"
- "\u5614"
- "\u74E2"
- "\u9291"
- "\u880D"
- "\u932C"
- "\u608C"
- "\u8A1D"
- "\u7DB8"
- "\u530D"
- "\u5310"
- "\u637A"
- "\u6A59"
- "\u5BB5"
- "\u9D60"
- "\u57F4"
- "\u7690"
- "\u9021"
- "\u4FF8"
- "\u7A63"
- "\u54A4"
- "\u8309"
- "\u8389"
- "\u6643"
- "\u6EF8"
- "\u5289"
- "\u5026"
- "\u8944"
- "\u7B4D"
- "\u5239"
- "\u83BD"
- "\u9041"
- "\u66F5"
- "\u79BD"
- "\u7B67"
- "\u7E0A"
- "\u7FD4"
- "\u5BF5"
- "\u834F"
- "\u758B"
- "\u84EC"
- "\u83B1"
- "\u8EAC"
- "\u696E"
- "\u76C8"
- "\u5C13"
- "\u72FC"
- "\u85C9"
- "\u965F"
- "\u620E"
- "\u4E8E"
- "\u6F58"
- "\u8012"
- "\u5F82"
- "\u5FA0"
- "\u99AE"
- "\u5F6D"
- "\u5E47"
- "\u9087"
- "\u6CD3"
- "\u80B1"
- "\u65BC"
- "\u6602"
- "\u8E64"
- "\u7463"
- "\u9A65"
- "\u4EA8"
- "\u8AEE"
- "\u77EE"
- "\u8569"
- "\u6566"
- "\u30EE"
- "\u6208"
- "\u8229"
- "\u9B6F"
- "\u65E0"
- "\u6159"
- "\u6127"
- "\u8340"
- "\u6309"
- "\u914B"
- "\u59F6"
- "\u723E"
- "\u8602"
- "\u986B"
- "\u593E"
- "\u59DA"
- "\u701D"
- "\u6FD8"
- "\u964B"
- "\u777E"
- "\u5B30"
- "\u5DBA"
- "\u821B"
- "\u7B65"
- "\u95A4"
- "\u68D8"
- "\u9812"
- "\u59BE"
- "\u8B2C"
- "\u4F0D"
- "\u537F"
- "\u8FEA"
- "\u5686"
- "\u60F9"
- "\u80DA"
- "\u6C6A"
- "\u543B"
- "\u9B51"
- "\u8F3B"
- "\u59C6"
- "\u84FC"
- "\u6AC2"
- "\u5315"
- "\u4F70"
- "\u7246"
- "\u5CD9"
- "\u725D"
- "\u9DF2"
- "\u7DCB"
- "\u7BAD"
- "\u82EB"
- "\u5366"
- "\u5B5F"
- "\u5323"
- "\u4ED4"
- "\u5D19"
- "\u6787"
- "\u6777"
- "\u81C0"
- "\u681E"
- "\u9E1E"
- "\u61FA"
- "\u55DA"
- "\u6DB8"
- "\u30C5"
- "\u8D16"
- "\u5E9A"
- "\u93D1"
- "\u9149"
- "\u670B"
- "\u70F9"
- "\u53C8"
- "\u7337"
- "\u7C00"
- "\u5B2C"
- "\u88B7"
- "\u6BB7"
- "\u51DB"
- "\u4EC0"
- "\u71FF"
- "\u5556"
- "\u7BC6"
- "\u7DD8"
- "\u5036"
- "\u6AC3"
- "\u8A03"
- "\u540F"
- "\u5CB1"
- "\u8A25"
- "\u958F"
- "\u5DBD"
- "\u722C"
- "\u618A"
- "\u7511"
- "\u6144"
- "\u5E25"
- "\u7704"
- "\u5A11"
- "\u50E5"
- "\u5016"
- "\u800C"
- "\u8F4D"
- "\u5583"
- "\u81BE"
- "\u7099"
- "\u85AF"
- "\u97EE"
- "\u4E99"
- "\u8B14"
- "\u86CE"
- "\u7425"
- "\u73C0"
- "\u698A"
- "\u7C3E"
- "\u8D6D"
- "\u8823"
- "\u8299"
- "\u8B01"
- "\u9022"
- "\u8466"
- "\u6670"
- "\u5398"
- "\u707C"
- "\u903C"
- "\u9328"
- "\u700B"
- "\u5FF8"
- "\u6029"
- "\u7165"
- "\u7B0F"
- "\u5FFD"
- "\u7708"
- "\u7DEC"
- "\u5C4D"
- "\u75BD"
- "\u6E5B"
- "\u788D"
- "\u8AE4"
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_sp/train/feats_stats.npz
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d6
normalize_before: true
macaron_style: false
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
distributed: true
```
</details>
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili | mbeukman | 2021-11-25T09:05:15Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"sw",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---
# xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-yoruba](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer-based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
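For reference, these hyperparameters map roughly onto Hugging Face `TrainingArguments` as sketched below. This is not the author's actual training script (those are in the linked repository), and the output directory name is illustrative:
```python
from transformers import TrainingArguments

# Hyperparameters as described above; all other arguments keep their defaults.
# The maximum sequence length of 200 is applied at tokenization time, not here.
training_args = TrainingArguments(
    output_dir="xlm-roberta-ner-swahili",  # illustrative name
    num_train_epochs=50,
    per_device_train_batch_size=32,
    learning_rate=5e-5,
)
```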
This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on a NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
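As an aside, the same data is also mirrored on the Hugging Face Hub as the `masakhaner` dataset; the sketch below assumes that mirror and its language config names (e.g. `"swa"` for Swahili), rather than the raw GitHub files the authors used:
```python
from datasets import load_dataset

# Swahili portion of MasakhaNER, as mirrored on the Hub
masakhaner_swa = load_dataset("masakhaner", "swa")
print(masakhaner_swa["train"][0])  # tokens with aligned NER tags
```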
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain-adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, whose data distribution is similar to the training set's, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) (This model) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof | mbeukman | 2021-11-25T09:05:13Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"wo",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- wo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "SAFIYETU BΓEY CΓ©y Koronaa !"
---
# xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-wolof](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Wolof part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and the uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
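Purely as an illustration of those hyperparameters (the actual training scripts are in the repository linked above), here is a hedged sketch using `transformers`; all names below are placeholders, not the original configuration:
```
from transformers import TrainingArguments

# Placeholder configuration mirroring the hyperparameters described above;
# the maximum sequence length of 200 would be applied at tokenisation time,
# e.g. tokenizer(..., truncation=True, max_length=200).
training_args = TrainingArguments(
    output_dir="xlmr-wolof-ner",        # placeholder output directory
    num_train_epochs=50,
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    seed=1,                             # the project repeated training with 5 different seeds
)
```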
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and to report the resources used, we list here how long the training process took, as well as what the minimum resources to reproduce it would be. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
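If you want to work with the same data, the Wolof configuration of MasakhaNER can also be loaded through the `datasets` library (assuming the `masakhaner` dataset script on the Hugging Face hub); a minimal sketch:
```
from datasets import load_dataset

# "wol" is the Wolof configuration; others include "swa", "kin", "lug", "luo", "pcm".
masakhaner_wol = load_dataset("masakhaner", "wol")
print(masakhaner_wol)                          # train / validation / test splits
print(masakhaner_wol["train"][0]["tokens"])    # whitespace-tokenised sentence
print(masakhaner_wol["train"][0]["ner_tags"])  # integer label ids
```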
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof) (This model) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | wol | 69.02 | 67.60 | 70.51 | 30.00 | 84.00 | 44.00 | 71.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | wol | 69.01 | 73.25 | 65.23 | 27.00 | 85.00 | 52.00 | 67.00 |
| [xlm-roberta-base-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-wolof) | [base](https://huggingface.co/xlm-roberta-base) | wol | 66.12 | 69.46 | 63.09 | 30.00 | 84.00 | 54.00 | 59.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "SAFIYETU BΓEY CΓ©y Koronaa !"
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof | mbeukman | 2021-11-25T09:05:05Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"wo",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- wo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "SAFIYETU BΓEY CΓ©y Koronaa !"
---
# xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Wolof part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and the uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and to report the resources used, we list here how long the training process took, as well as what the minimum resources to reproduce it would be. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
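The evaluation code itself lives in the repository; `seqeval` is one common way to compute such entity-level scores from BIO-tagged sequences, sketched below with toy inputs rather than real predictions:
```
from seqeval.metrics import classification_report, f1_score

# Toy gold and predicted label sequences in the same BIO scheme as this model
y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O", "O"]]
print(f1_score(y_true, y_pred))               # aggregate entity-level F1
print(classification_report(y_true, y_pred))  # per-category precision / recall / F1
```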
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | wol | 69.01 | 73.25 | 65.23 | 27.00 | 85.00 | 52.00 | 67.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | wol | 69.02 | 67.60 | 70.51 | 30.00 | 84.00 | 44.00 | 71.00 |
| [xlm-roberta-base-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-wolof) | [base](https://huggingface.co/xlm-roberta-base) | wol | 66.12 | 69.46 | 63.09 | 30.00 | 84.00 | 54.00 | 59.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "SAFIYETU BΓEY CΓ©y Koronaa !"
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-naija | mbeukman | 2021-11-25T09:05:00Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"pcm",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- pcm
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Mixed Martial Arts joinbodi , Ultimate Fighting Championship , UFC don decide say dem go enta back di octagon on Saturday , 9 May , for Jacksonville , Florida ."
---
# xlm-roberta-base-finetuned-swahili-finetuned-ner-naija
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Nigerian Pidgin part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and the uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and to report the resources used, we list here how long the training process took, as well as what the minimum resources to reproduce it would be. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
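As a hedged suggestion (not how the released models were trained), an effective batch size of 32 can be approximated on such a small GPU by combining a per-device batch of 1 with gradient accumulation; the names below are placeholders:
```
from transformers import TrainingArguments

# Emulate an effective batch size of 32 on a small GPU via gradient accumulation.
low_memory_args = TrainingArguments(
    output_dir="xlmr-naija-ner-lowmem",  # placeholder output directory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,
    learning_rate=5e-5,
    num_train_epochs=50,
)
```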
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-naija) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | pcm | 89.12 | 87.84 | 90.42 | 90.00 | 89.00 | 82.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-naija) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | pcm | 88.06 | 87.04 | 89.12 | 90.00 | 88.00 | 81.00 | 92.00 |
| [xlm-roberta-base-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-naija) | [base](https://huggingface.co/xlm-roberta-base) | pcm | 88.89 | 88.13 | 89.66 | 92.00 | 87.00 | 82.00 | 94.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-naija'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi , Ultimate Fighting Championship , UFC don decide say dem go enta back di octagon on Saturday , 9 May , for Jacksonville , Florida ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda | mbeukman | 2021-11-25T09:04:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"lug",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- lug
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Empaka zaakubeera mu kibuga Liverpool e Bungereza , okutandika nga July 12 ."
---
# xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Luganda part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and the uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and to report the resources used, we list here how long the training process took, as well as what the minimum resources to reproduce it would be. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
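To check how these abbreviations map onto the model's output indices, you can inspect its configuration; a small sketch:
```
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda")
# id2label maps the model's output indices to the abbreviations listed above
print(config.id2label)
```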
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | lug | 82.57 | 80.38 | 84.89 | 75.00 | 80.00 | 82.00 | 87.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | lug | 85.37 | 82.75 | 88.17 | 78.00 | 82.00 | 80.00 | 92.00 |
| [xlm-roberta-base-finetuned-ner-luganda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-luganda) | [base](https://huggingface.co/xlm-roberta-base) | lug | 80.91 | 78.59 | 83.37 | 73.00 | 78.00 | 77.00 | 86.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Empaka zaakubeera mu kibuga Liverpool e Bungereza , okutandika nga July 12 ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda | mbeukman | 2021-11-25T09:04:53Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"rw",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- rw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati β Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
---
# xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Kinyarwanda part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and the uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and to report the resources used, we list here how long the training process took, as well as what the minimum resources to reproduce it would be. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | kin | 76.31 | 72.64 | 80.37 | 70.00 | 76.00 | 75.00 | 84.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | kin | 79.55 | 75.56 | 83.99 | 69.00 | 79.00 | 77.00 | 90.00 |
| [xlm-roberta-base-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda) | [base](https://huggingface.co/xlm-roberta-base) | kin | 74.59 | 72.17 | 77.17 | 70.00 | 75.00 | 70.00 | 82.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati β Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
ner_results = nlp(example)
print(ner_results)
```
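As a small addition to the snippet above (not part of the original instructions), the same pipeline can be moved onto a GPU when one is available:
```
import torch
from transformers import pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda'
device = 0 if torch.cuda.is_available() else -1  # -1 keeps the pipeline on CPU
nlp = pipeline("ner", model=model_name, device=device)
print(nlp("Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati “ Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."))
```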
|
mbeukman/xlm-roberta-base-finetuned-ner-wolof | mbeukman | 2021-11-25T09:04:43Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"wo",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- wo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "SAFIYETU BΓEY CΓ©y Koronaa !"
---
# xlm-roberta-base-finetuned-ner-wolof
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Wolof part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and the uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and to report the resources used, we list here how long the training process took, as well as what the minimum resources to reproduce it would be. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
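The integer `ner_tags` in the MasakhaNER data correspond to these abbreviations; a small sketch (assuming the `masakhaner` dataset script on the Hugging Face hub) prints tokens alongside their label names:
```
from datasets import load_dataset

wol = load_dataset("masakhaner", "wol")
label_names = wol["train"].features["ner_tags"].feature.names  # e.g. ["O", "B-PER", ...]
example = wol["train"][0]
print(list(zip(example["tokens"], [label_names[i] for i in example["ner_tags"]])))
```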
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-wolof) (This model) | [base](https://huggingface.co/xlm-roberta-base) | wol | 66.12 | 69.46 | 63.09 | 30.00 | 84.00 | 54.00 | 59.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | wol | 69.01 | 73.25 | 65.23 | 27.00 | 85.00 | 52.00 | 67.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | wol | 69.02 | 67.60 | 70.51 | 30.00 | 84.00 | 44.00 | 71.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-wolof'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "SAFIYETU BΓEY CΓ©y Koronaa !"
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-ner-naija | mbeukman | 2021-11-25T09:04:38Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"pcm",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- pcm
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Mixed Martial Arts joinbodi , Ultimate Fighting Championship , UFC don decide say dem go enta back di octagon on Saturday , 9 May , for Jacksonville , Florida ."
---
# xlm-roberta-base-finetuned-ner-naija
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Nigerian Pidgin part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and the uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions; further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
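To inspect the data yourself, the dataset can also be loaded through the `datasets` library (a sketch; `pcm` is the MasakhaNER configuration name for Nigerian Pidgin):
```
from datasets import load_dataset

# Nigerian Pidgin portion of MasakhaNER, with train/validation/test splits.
masakhaner = load_dataset("masakhaner", "pcm")
print(masakhaner["train"][0])  # tokens and NER tag ids for one sentence
```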
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should only cover public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER; since that data distribution is similar to the training set's, these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-naija) (This model) | [base](https://huggingface.co/xlm-roberta-base) | pcm | 88.89 | 88.13 | 89.66 | 92.00 | 87.00 | 82.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-naija) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | pcm | 88.06 | 87.04 | 89.12 | 90.00 | 88.00 | 81.00 | 92.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-naija) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | pcm | 89.12 | 87.84 | 90.42 | 90.00 | 89.00 | 82.00 | 94.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-naija'
# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# Build a token-classification (NER) pipeline around them
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi , Ultimate Fighting Championship , UFC don decide say dem go enta back di octagon on Saturday , 9 May , for Jacksonville , Florida ."
ner_results = nlp(example)  # one prediction dict per recognised token
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda | mbeukman | 2021-11-25T09:04:30Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"rw",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- rw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati β Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
---
# xlm-roberta-base-finetuned-ner-kinyarwanda
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Kinyarwanda part.
More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions; further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should only cover public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER; since that data distribution is similar to the training set's, these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
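For reference, entity-level scores of this kind are typically computed with the `seqeval` library (a sketch on toy label sequences, not this project's evaluation script):
```
from seqeval.metrics import f1_score, precision_score, recall_score

# Toy gold and predicted sequences in the same BIO scheme as this model.
y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O", "O"]]
print(f1_score(y_true, y_pred))         # entity-level F1
print(precision_score(y_true, y_pred))  # entity-level precision
print(recall_score(y_true, y_pred))     # entity-level recall
```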
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda) (This model) | [base](https://huggingface.co/xlm-roberta-base) | kin | 74.59 | 72.17 | 77.17 | 70.00 | 75.00 | 70.00 | 82.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | kin | 79.55 | 75.56 | 83.99 | 69.00 | 79.00 | 77.00 | 90.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | kin | 76.31 | 72.64 | 80.37 | 70.00 | 76.00 | 75.00 | 84.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda'
# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# Build a token-classification (NER) pipeline around them
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati " Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
ner_results = nlp(example)  # one prediction dict per recognised token
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-ner-igbo | mbeukman | 2021-11-25T09:04:28Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"ig",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- ig
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Ike α»da jα»₯α»₯ otα»₯ nkeji banyere oke ogbugbu na - eme n'ala Naijiria agwα»₯la Ekweremmadα»₯"
---
# xlm-roberta-base-finetuned-ner-igbo
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Igbo part.
More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions; further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should only cover public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER; since that data distribution is similar to the training set's, these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-igbo) (This model) | [base](https://huggingface.co/xlm-roberta-base) | ibo | 86.06 | 85.20 | 86.94 | 76.00 | 86.00 | 90.00 | 87.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | ibo | 88.39 | 87.08 | 89.74 | 74.00 | 91.00 | 90.00 | 91.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-igbo) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | ibo | 84.93 | 83.63 | 86.26 | 70.00 | 88.00 | 89.00 | 84.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-igbo'
# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# Build a token-classification (NER) pipeline around them
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ike ịda jụụ otụ nkeji banyere oke ogbugbu na - eme n'ala Naijiria agwụla Ekweremmadụ"
ner_results = nlp(example)  # one prediction dict per recognised token
print(ner_results)
```
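The pipeline output refers to labels by name; the mapping between label ids and the tags in the table above is stored in the model configuration (a short sketch; the exact ordering shown in the comment is illustrative):
```
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "mbeukman/xlm-roberta-base-finetuned-ner-igbo"
)
print(model.config.id2label)  # e.g. {0: 'O', 1: 'B-DATE', 2: 'I-DATE', ...}
```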
|
mbeukman/xlm-roberta-base-finetuned-ner-hausa | mbeukman | 2021-11-25T09:04:25Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"ha",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- ha
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "A saurari cikakken rahoton wakilin Muryar Amurka Ibrahim Abdul'aziz"
---
# xlm-roberta-base-finetuned-ner-hausa
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Hausa part.
More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
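The maximum sequence length is applied at tokenisation time, where pre-split words are converted to sub-word ids and truncated, roughly as follows (a sketch, not the project's exact preprocessing code):
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
words = "A saurari cikakken rahoton wakilin Muryar Amurka".split()
# Truncate to the maximum sequence length of 200 used during fine-tuning.
encoding = tokenizer(words, is_split_into_words=True,
                     truncation=True, max_length=200)
print(encoding.word_ids())  # maps each sub-word back to its source word
```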
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions; further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should only cover public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER; since that data distribution is similar to the training set's, these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-hausa) (This model) | [base](https://huggingface.co/xlm-roberta-base) | hau | 89.94 | 87.74 | 92.25 | 84.00 | 94.00 | 74.00 | 93.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-hausa) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | hau | 92.27 | 90.46 | 94.16 | 85.00 | 95.00 | 80.00 | 97.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | hau | 89.14 | 87.18 | 91.20 | 82.00 | 93.00 | 76.00 | 93.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-hausa'
# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# Build a token-classification (NER) pipeline around them
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "A saurari cikakken rahoton wakilin Muryar Amurka Ibrahim Abdul'aziz"
ner_results = nlp(example)  # one prediction dict per recognised token
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili | mbeukman | 2021-11-25T09:04:18Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"sw",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---
# xlm-roberta-base-finetuned-luo-finetuned-ner-swahili
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part.
More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions; further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should only cover public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER; since that data distribution is similar to the training set's, these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) (This model) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili'
# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# Build a token-classification (NER) pipeline around them
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
ner_results = nlp(example)  # one prediction dict per recognised token
print(ner_results)
```
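For completeness, this is roughly what the pipeline does under the hood (a minimal PyTorch sketch; sub-word merging and scoring are simplified away):
```
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = 'mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

inputs = tokenizer("Wizara ya afya ya Tanzania", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits    # shape: (1, sequence_length, num_labels)
ids = logits.argmax(dim=-1)[0]         # most likely label id per sub-word
print([model.config.id2label[int(i)] for i in ids])
```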
|
mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo | mbeukman | 2021-11-25T09:04:15Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"luo",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- luo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "ο»ΏJii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"
---
# xlm-roberta-base-finetuned-luo-finetuned-ner-luo
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Luo part.
More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
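If only around 6.5GB of VRAM is available, one common workaround (not what was done in this project, which used full batches of 32) is gradient accumulation, which emulates the larger batch size at batch-size-1 memory cost:
```
from transformers import TrainingArguments

# 32 accumulation steps at a per-device batch size of 1 give an
# effective batch size of 32.
training_args = TrainingArguments(
    output_dir="ner-low-memory",     # placeholder output directory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,
    learning_rate=5e-5,
)
```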
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions; further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should only cover public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER; since that data distribution is similar to the training set's, these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo) (This model) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | luo | 78.71 | 78.91 | 78.52 | 72.00 | 84.00 | 59.00 | 87.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | luo | 78.13 | 77.75 | 78.52 | 65.00 | 82.00 | 61.00 | 89.00 |
| [xlm-roberta-base-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-luo) | [base](https://huggingface.co/xlm-roberta-base) | luo | 75.99 | 76.18 | 75.80 | 71.00 | 76.00 | 62.00 | 85.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo'
# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# Build a token-classification (NER) pipeline around them
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"
ner_results = nlp(example)  # one prediction dict per recognised token
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili | mbeukman | 2021-11-25T09:04:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"sw",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---
# xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-luganda](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
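For reference, here is a rough sketch of those hyperparameters expressed as Hugging Face `TrainingArguments`; this is an illustration only (the output directory and seed are placeholders), and the actual training scripts are in the Github repository linked above.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the reported hyperparameters, not the original training script.
training_args = TrainingArguments(
    output_dir="xlmr-masakhaner-swa",   # placeholder
    num_train_epochs=50,
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    seed=1,                             # one of 5 different seeds was tried; exact values unknown
)
# The maximum sequence length of 200 is applied at tokenization time,
# e.g. tokenizer(..., truncation=True, max_length=200).
```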
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and of reporting the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. Using a batch size of 32 required at least 14GB of GPU memory, although it was just possible to fit these models into around 6.5GB of VRAM with a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into, e.g., interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have appeared in the training data (due to, e.g., different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics were computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor for your application, this limitation may need to be addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the last four give performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) (This model) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
ner_results = nlp(example)
print(ner_results)
```
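The raw output of the `ner` pipeline is per (subword) token. If grouped entity spans are preferred, recent versions of `transformers` support an aggregation option; a small self-contained sketch:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# aggregation_strategy="simple" merges subword pieces into whole entity spans with one label each.
nlp_grouped = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
print(nlp_grouped(example))
```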
|
mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili | mbeukman | 2021-11-25T09:04:02Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"sw",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---
# xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-igbo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and of reporting the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. Using a batch size of 32 required at least 14GB of GPU memory, although it was just possible to fit these models into around 6.5GB of VRAM with a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into, e.g., interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have appeared in the training data (due to, e.g., different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics were computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor for your application, this limitation may need to be addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the last four give performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) (This model) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo | mbeukman | 2021-11-25T09:04:00Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"ig",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- ig
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Ike α»da jα»₯α»₯ otα»₯ nkeji banyere oke ogbugbu na - eme n'ala Naijiria agwα»₯la Ekweremmadα»₯"
---
# xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-igbo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Igbo part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and of reporting the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. Using a batch size of 32 required at least 14GB of GPU memory, although it was just possible to fit these models into around 6.5GB of VRAM with a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into, e.g., interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have appeared in the training data (due to, e.g., different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics were computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor for your application, this limitation may need to be addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the last four give performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo) (This model) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | ibo | 88.39 | 87.08 | 89.74 | 74.00 | 91.00 | 90.00 | 91.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-igbo) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | ibo | 84.93 | 83.63 | 86.26 | 70.00 | 88.00 | 89.00 | 84.00 |
| [xlm-roberta-base-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-igbo) | [base](https://huggingface.co/xlm-roberta-base) | ibo | 86.06 | 85.20 | 86.94 | 76.00 | 86.00 | 90.00 | 87.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ike α»da jα»₯α»₯ otα»₯ nkeji banyere oke ogbugbu na - eme n'ala Naijiria agwα»₯la Ekweremmadα»₯"
ner_results = nlp(example)
print(ner_results)
```
|
DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro | DiegoAlysson | 2021-11-25T03:08:55Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 27.9273
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2915
- Bleu: 27.9273
- Gen Len: 34.0935
## Model description
More information needed
## Intended uses & limitations
More information needed
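As a minimal, unverified usage sketch, the checkpoint should work with the standard `transformers` translation pipeline for English-to-Romanian (the pipeline task name and generation settings below are standard defaults, not taken from the original card):

```python
from transformers import pipeline

# Hedged example: translates English input to Romanian with this fine-tuned Marian checkpoint.
translator = pipeline("translation_en_to_ro", model="DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro")
print(translator("The committee approved the new regulation yesterday.", max_length=64))
```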
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7448 | 1.0 | 38145 | 1.2915 | 27.9273 | 34.0935 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
arnolfokam/bert-base-uncased-pcm | arnolfokam | 2021-11-24T21:14:03Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"pcm",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- pcm
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
---
# Model description
**bert-base-uncased-pcm** is a fine-tuned version of the BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Nigerian Pidgin corpus **(pcm)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness of these models may make them dangerous if deployed into a production system.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**bert-base-uncased-pcm**| 88.61 | 84.17 | 86.33
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-pcm")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
ner_results = nlp(example)
print(ner_results)
``` |
castorini/monot5-large-msmarco-10k | castorini | 2021-11-24T19:15:14Z | 149 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch).
This model usually has a better zero-shot performance than `monot5-large-msmarco`, i.e., it performs better on datasets different from MS MARCO.
For more details on how to use it, check the following links:
- [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example)
- [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md)
- [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md)
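For a rough idea of what the reranker does, here is a hedged sketch of scoring one query-document pair directly with `transformers`; the "Query: ... Document: ... Relevant:" prompt and the true/false scoring tokens follow the monoT5 formulation, but for real experiments the PyGaggle examples above should be preferred.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "castorini/monot5-large-msmarco-10k"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).eval()

query = "what causes tides"
passage = "Tides are caused by the gravitational pull of the moon and the sun on the oceans."
inputs = tokenizer(f"Query: {query} Document: {passage} Relevant:", return_tensors="pt")

# The relevance score is the probability of generating "true" (vs "false") as the first output token.
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)
with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]

true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]
score = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(f"relevance score: {score:.4f}")
```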
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/) |
Plimpton/distilbert-base-uncased-finetuned-squad | Plimpton | 2021-11-24T17:15:45Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4285
## Model description
More information needed
## Intended uses & limitations
More information needed
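As a minimal, unverified usage sketch with the standard `transformers` question-answering pipeline (the question and context below are made up for illustration; since the model was tuned on squad_v2, it may also decide a question is unanswerable):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Plimpton/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```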
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5169 | 1.0 | 1642 | 1.6958 |
| 1.1326 | 2.0 | 3284 | 2.0009 |
| 0.8638 | 3.0 | 4926 | 2.4285 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
AdapterHub/roberta-base-pf-yelp_polarity | AdapterHub | 2021-11-24T16:33:21Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"roberta",
"en",
"dataset:yelp_polarity",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- roberta
- adapter-transformers
datasets:
- yelp_polarity
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-yelp_polarity` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [yelp_polarity](https://huggingface.co/datasets/yelp_polarity/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-yelp_polarity", source="hf")
model.active_adapters = adapter_name
```
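Following on from this, a short, hedged sketch of running sentiment inference with the activated adapter; the mapping of output indices to negative/positive is an assumption and should be checked against the head's label configuration.

```python
import torch
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("The food was absolutely wonderful!", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from the snippet above

predicted_class = int(torch.argmax(outputs.logits, dim=-1))
print(predicted_class)  # assumed: 0 = negative, 1 = positive
```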
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-wnut_17 | AdapterHub | 2021-11-24T16:33:15Z | 5 | 0 | adapter-transformers | [
"adapter-transformers",
"token-classification",
"roberta",
"en",
"dataset:wnut_17",
"arxiv:2104.08247",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- token-classification
- roberta
- adapter-transformers
datasets:
- wnut_17
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-wnut_17` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [wnut_17](https://huggingface.co/datasets/wnut_17/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-wnut_17", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-ud_pos | AdapterHub | 2021-11-24T16:32:48Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"token-classification",
"roberta",
"adapterhub:pos/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- token-classification
- roberta
- adapterhub:pos/ud_ewt
- adapter-transformers
datasets:
- universal_dependencies
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-ud_pos` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [pos/ud_ewt](https://adapterhub.ml/explore/pos/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-ud_pos", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-swag | AdapterHub | 2021-11-24T16:32:26Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"en",
"dataset:swag",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
tags:
- roberta
- adapter-transformers
datasets:
- swag
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-swag` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [swag](https://huggingface.co/datasets/swag/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-swag", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
    author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/roberta-base-pf-social_i_qa | AdapterHub | 2021-11-24T16:32:05Z | 5 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"en",
"dataset:social_i_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
tags:
- roberta
- adapter-transformers
datasets:
- social_i_qa
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-social_i_qa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [social_i_qa](https://huggingface.co/datasets/social_i_qa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-social_i_qa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
    author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/roberta-base-pf-sick | AdapterHub | 2021-11-24T16:31:49Z | 10 | 1 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"roberta",
"adapterhub:nli/sick",
"en",
"dataset:sick",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- roberta
- adapter-transformers
- adapterhub:nli/sick
- text-classification
datasets:
- sick
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-sick` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-sick", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-scicite | AdapterHub | 2021-11-24T16:31:36Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"roberta",
"en",
"dataset:scicite",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- roberta
- adapter-transformers
datasets:
- scicite
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-scicite` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [scicite](https://huggingface.co/datasets/scicite/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-scicite", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-quartz | AdapterHub | 2021-11-24T16:31:03Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"en",
"dataset:quartz",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
tags:
- roberta
- adapter-transformers
datasets:
- quartz
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-quartz` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quartz](https://huggingface.co/datasets/quartz/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-quartz", source="hf")
model.active_adapters = adapter_name
```
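As an illustration, one way to score the answer options might be the sketch below; the example question, the pairing of question and choice, and the batch layout are assumptions about the expected multiple-choice input format:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model.eval()  # disable dropout for inference

question = "As you move away from a light source, its apparent brightness..."
choices = ["increases", "decreases"]
# Encode one (question, choice) pair per option, then add a batch dimension
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits  # one score per choice
print(choices[logits.argmax(dim=-1).item()])
```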
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-qqp | AdapterHub | 2021-11-24T16:30:48Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"adapterhub:sts/qqp",
"roberta",
"en",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- adapter-transformers
- adapterhub:sts/qqp
- roberta
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-qqp` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/qqp](https://adapterhub.ml/explore/sts/qqp/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-qqp", source="hf")
model.active_adapters = adapter_name
```
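Continuing from the snippet above, a minimal sketch for scoring a question pair could look like this (the example questions and the use of `.logits` are illustrative assumptions):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model.eval()  # disable dropout for inference

# Hypothetical question pair to test for duplication
inputs = tokenizer(
    "How can I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index (duplicate vs. not)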
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-multirc | AdapterHub | 2021-11-24T16:30:32Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"adapterhub:rc/multirc",
"roberta",
"en",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- adapterhub:rc/multirc
- roberta
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-multirc` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/multirc](https://adapterhub.ml/explore/rc/multirc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-multirc", source="hf")
model.active_adapters = adapter_name
```
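A rough inference sketch follows; note that MultiRC pairs a passage with a question–answer candidate, and the exact input packing below is a guess for illustration, not taken from the card:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model.eval()  # disable dropout for inference

passage = "Susan went to the market on Monday and bought apples."
qa_pair = "What did Susan buy? || apples"  # question plus candidate answer (assumed packing)
inputs = tokenizer(passage, qa_pair, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # e.g., 1 = candidate answer judged correct
```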
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-mrpc | AdapterHub | 2021-11-24T16:30:24Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"roberta",
"adapterhub:sts/mrpc",
"en",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- roberta
- adapterhub:sts/mrpc
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-mrpc` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/mrpc](https://adapterhub.ml/explore/sts/mrpc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-mrpc", source="hf")
model.active_adapters = adapter_name
```
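As a quick usage illustration, one might score a sentence pair for paraphrase like this (the example sentences and the label handling are assumptions, not part of the card):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model.eval()  # disable dropout for inference

# Hypothetical sentence pair to test for paraphrase
inputs = tokenizer(
    "The company said its profits rose in the quarter.",
    "Quarterly profits at the company increased.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index (paraphrase vs. not)
```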
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-emo | AdapterHub | 2021-11-24T16:29:39Z | 3 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"roberta",
"en",
"dataset:emo",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- roberta
- adapter-transformers
datasets:
- emo
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-emo` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [emo](https://huggingface.co/datasets/emo/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-emo", source="hf")
model.active_adapters = adapter_name
```
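A minimal, untested sketch of emotion classification with the loaded adapter might look like this (the example utterance and the `.logits` access are illustrative assumptions):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model.eval()  # disable dropout for inference

# Hypothetical utterance whose emotion we want to classify
inputs = tokenizer("I can't believe we finally won, this is amazing!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # index of the predicted emotion class
```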
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-cosmos_qa | AdapterHub | 2021-11-24T16:29:17Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"adapterhub:comsense/cosmosqa",
"en",
"dataset:cosmos_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
tags:
- roberta
- adapterhub:comsense/cosmosqa
- adapter-transformers
datasets:
- cosmos_qa
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-cosmos_qa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/cosmosqa](https://adapterhub.ml/explore/comsense/cosmosqa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cosmos_qa", source="hf")
model.active_adapters = adapter_name
```
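For illustration, answer options could be scored as below; the example context, the concatenation of context and question, and the batch layout are assumptions about the multiple-choice input format:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model.eval()  # disable dropout for inference

context = "It was raining, so Sam grabbed an umbrella before leaving the house."
question = "Why did Sam grab an umbrella?"
choices = ["Because it was sunny.", "Because it was raining.", "Because he lost his keys."]
# One (context + question, choice) pair per option, stacked into a single batch
enc = tokenizer([f"{context} {question}"] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits
print(choices[logits.argmax(dim=-1).item()])
```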
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-conll2003_pos | AdapterHub | 2021-11-24T16:29:03Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"token-classification",
"roberta",
"adapterhub:pos/conll2003",
"en",
"dataset:conll2003",
"arxiv:2104.08247",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- token-classification
- roberta
- adapterhub:pos/conll2003
- adapter-transformers
datasets:
- conll2003
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-conll2003_pos` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2003_pos", source="hf")
model.active_adapters = adapter_name
```
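Continuing from the snippet above, a minimal POS-tagging sketch could look like this (the example sentence and the mapping from logits to label indices are illustrative assumptions):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model.eval()  # disable dropout for inference

inputs = tokenizer("Alice quickly read the old book.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, pred_ids)))  # per-token POS label indices, incl. special tokens
```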
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-conll2003 | AdapterHub | 2021-11-24T16:28:56Z | 7 | 1 | adapter-transformers | [
"adapter-transformers",
"token-classification",
"roberta",
"adapterhub:ner/conll2003",
"en",
"dataset:conll2003",
"arxiv:2104.08247",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- token-classification
- roberta
- adapterhub:ner/conll2003
- adapter-transformers
datasets:
- conll2003
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-conll2003` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ner/conll2003](https://adapterhub.ml/explore/ner/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2003", source="hf")
model.active_adapters = adapter_name
```
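As a quick sanity check, named entities could be predicted as in the sketch below (the example sentence and the label-index handling are assumptions, not part of the card):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model.eval()  # disable dropout for inference

inputs = tokenizer("John Smith works for Acme Corp in Boston.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, pred_ids)))  # per-token NER label indices, incl. special tokens
```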
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-wnut_17 | AdapterHub | 2021-11-24T16:27:13Z | 3 | 0 | adapter-transformers | [
"adapter-transformers",
"token-classification",
"bert",
"en",
"dataset:wnut_17",
"arxiv:2104.08247",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- token-classification
- bert
- adapter-transformers
datasets:
- wnut_17
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-wnut_17` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wnut_17](https://huggingface.co/datasets/wnut_17/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-wnut_17", source="hf")
model.active_adapters = adapter_name
```
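For illustration, tagging a noisy, tweet-like sentence for emerging entities might look like this (the example text and label handling are assumptions):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

inputs = tokenizer("just landed in san francisco for the big apple event", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, pred_ids)))  # per-token entity label indices
```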
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-winogrande | AdapterHub | 2021-11-24T16:27:05Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"adapterhub:comsense/winogrande",
"en",
"dataset:winogrande",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
tags:
- bert
- adapterhub:comsense/winogrande
- adapter-transformers
datasets:
- winogrande
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-winogrande` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/winogrande](https://adapterhub.ml/explore/comsense/winogrande/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-winogrande", source="hf")
model.active_adapters = adapter_name
```
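A rough sketch for scoring the two candidate fillers follows; pairing the sentence with each option is a guess at the expected multiple-choice input format, not taken from the card:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

sentence = "The trophy didn't fit in the suitcase because the _ was too big."
options = ["trophy", "suitcase"]
# One (sentence, option) pair per candidate, stacked into a single batch
enc = tokenizer([sentence] * len(options), options, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits
print(options[logits.argmax(dim=-1).item()])
```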
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-ud_pos | AdapterHub | 2021-11-24T16:26:47Z | 11 | 0 | adapter-transformers | [
"adapter-transformers",
"token-classification",
"bert",
"adapterhub:pos/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- token-classification
- bert
- adapterhub:pos/ud_ewt
- adapter-transformers
datasets:
- universal_dependencies
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-ud_pos` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/ud_ewt](https://adapterhub.ml/explore/pos/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-ud_pos", source="hf")
model.active_adapters = adapter_name
```
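Continuing from the snippet above, a minimal POS-tagging sketch might look like this (the example sentence and label handling are illustrative assumptions):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

inputs = tokenizer("The cats sat on the mat.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, pred_ids)))  # per-token POS label indices, incl. special tokens
```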
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-trec | AdapterHub | 2021-11-24T16:26:34Z | 6 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"bert",
"en",
"dataset:trec",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- bert
- adapter-transformers
datasets:
- trec
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-trec` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [trec](https://huggingface.co/datasets/trec/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-trec", source="hf")
model.active_adapters = adapter_name
```
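As a quick usage illustration, one might classify a question's type like this (the example question and the `.logits` access are assumptions, not part of the card):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

# Hypothetical input: a question whose answer type we want to classify
inputs = tokenizer("How far is it from Earth to the Moon?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # index of the predicted question class
```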
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-swag | AdapterHub | 2021-11-24T16:26:26Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"en",
"dataset:swag",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
tags:
- bert
- adapter-transformers
datasets:
- swag
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-swag` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [swag](https://huggingface.co/datasets/swag/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-swag", source="hf")
model.active_adapters = adapter_name
```
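For illustration, candidate continuations could be scored as below; the example context, endings, and batch layout are assumptions about the multiple-choice input format:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

context = "The chef picks up the knife and"
endings = ["starts chopping the onions.", "flies out of the window.", "sings the alphabet."]
# One (context, ending) pair per candidate, stacked into a single batch
enc = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits
print(endings[logits.argmax(dim=-1).item()])
```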
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-snli | AdapterHub | 2021-11-24T16:25:59Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"bert",
"en",
"dataset:snli",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- bert
- adapter-transformers
datasets:
- snli
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-snli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-snli", source="hf")
model.active_adapters = adapter_name
```
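Continuing from the snippet above, a minimal NLI sketch might look like this (the premise/hypothesis pair and the label handling are illustrative assumptions):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

inputs = tokenizer(
    "A man is playing a guitar on stage.",  # premise
    "A man is performing music.",           # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index (entailment/neutral/contradiction)
```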
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-sick | AdapterHub | 2021-11-24T16:25:52Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"bert",
"adapterhub:nli/sick",
"en",
"dataset:sick",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- adapter-transformers
- bert
- adapterhub:nli/sick
datasets:
- sick
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-sick` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sick", source="hf")
model.active_adapters = adapter_name
```
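A minimal, untested sketch for scoring a sentence pair follows (the example pair and the `.logits` access are assumptions, not part of the card):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

inputs = tokenizer(
    "A dog is running across the field.",   # sentence A
    "An animal is moving through the grass.",  # sentence B
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted relatedness/entailment class index
```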
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-scitail | AdapterHub | 2021-11-24T16:25:46Z | 3 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"bert",
"adapterhub:nli/scitail",
"en",
"dataset:scitail",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- bert
- adapterhub:nli/scitail
- adapter-transformers
datasets:
- scitail
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-scitail` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/scitail](https://adapterhub.ml/explore/nli/scitail/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-scitail", source="hf")
model.active_adapters = adapter_name
```
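For illustration, a science-domain entailment pair could be scored like this (the example sentences and label handling are assumptions):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

inputs = tokenizer(
    "Plants produce oxygen during photosynthesis.",  # premise
    "Photosynthesis releases oxygen.",               # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index (entails vs. neutral)
```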
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-rotten_tomatoes | AdapterHub | 2021-11-24T16:25:29Z | 6 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"bert",
"adapterhub:sentiment/rotten_tomatoes",
"en",
"dataset:rotten_tomatoes",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- bert
- adapterhub:sentiment/rotten_tomatoes
- adapter-transformers
datasets:
- rotten_tomatoes
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-rotten_tomatoes` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-rotten_tomatoes", source="hf")
model.active_adapters = adapter_name
```
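As a quick sanity check, sentiment for a review snippet could be predicted as below (the example review and the `.logits` access are illustrative assumptions):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

# Hypothetical input: a movie-review sentence to classify as positive or negative
inputs = tokenizer("A gripping, beautifully shot film with a phenomenal cast.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted sentiment class index
```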
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-race | AdapterHub | 2021-11-24T16:25:17Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"adapterhub:rc/race",
"bert",
"en",
"dataset:race",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
tags:
- adapterhub:rc/race
- bert
- adapter-transformers
datasets:
- race
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-race` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/race](https://adapterhub.ml/explore/rc/race/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-race", source="hf")
model.active_adapters = adapter_name
```
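A rough sketch for scoring answer options follows; packing the passage and question into the first segment is a guess at the expected multiple-choice format, not taken from the card:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

passage = "Tom missed the bus, so he walked to school and arrived late."
question = "Why was Tom late?"
options = ["He overslept.", "He missed the bus.", "He forgot his bag.", "He was sick."]
# One (passage + question, option) pair per candidate, stacked into a single batch
enc = tokenizer([f"{passage} {question}"] * len(options), options, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits
print(options[logits.argmax(dim=-1).item()])
```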
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-quartz | AdapterHub | 2021-11-24T16:25:06Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"en",
"dataset:quartz",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
tags:
- bert
- adapter-transformers
datasets:
- quartz
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-quartz` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quartz](https://huggingface.co/datasets/quartz/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quartz", source="hf")
model.active_adapters = adapter_name
```
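For illustration, the two answer options could be scored as in this sketch (the example question, the question/choice pairing, and the batch layout are assumptions about the multiple-choice format):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

question = "Adding more resistance to a circuit makes the current..."
choices = ["larger", "smaller"]
# One (question, choice) pair per option, stacked into a single batch
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits
print(choices[logits.argmax(dim=-1).item()])
```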
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-mrpc | AdapterHub | 2021-11-24T16:24:25Z | 3 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"bert",
"adapterhub:sts/mrpc",
"en",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- bert
- adapterhub:sts/mrpc
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-mrpc` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/mrpc](https://adapterhub.ml/explore/sts/mrpc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mrpc", source="hf")
model.active_adapters = adapter_name
```
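Continuing from the snippet above, a paraphrase check might look like this (the example sentences and label handling are illustrative assumptions):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

inputs = tokenizer(
    "Shares of the firm fell sharply after the announcement.",
    "The firm's stock dropped steeply once the news broke.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index (paraphrase vs. not)
```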
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-emo | AdapterHub | 2021-11-24T16:23:01Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"bert",
"en",
"dataset:emo",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- bert
- adapter-transformers
datasets:
- emo
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-emo` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emo](https://huggingface.co/datasets/emo/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-emo", source="hf")
model.active_adapters = adapter_name
```
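As a quick usage illustration, an utterance's emotion could be predicted as below (the example text and the `.logits` access are assumptions, not part of the card):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

# Hypothetical input: a chat message whose emotion we want to classify
inputs = tokenizer("why do you never reply to my messages", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # index of the predicted emotion class
```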
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-copa | AdapterHub | 2021-11-24T16:22:33Z | 5 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"adapterhub:comsense/copa",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
tags:
- bert
- adapterhub:comsense/copa
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-copa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/copa](https://adapterhub.ml/explore/comsense/copa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-copa", source="hf")
model.active_adapters = adapter_name
```
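A rough sketch for choosing between the two alternatives follows; pairing the premise with each alternative is a guess at the expected multiple-choice input format, not taken from the card:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

premise = "The man broke his toe. What was the cause?"
alternatives = ["He got a hole in his sock.", "He dropped a hammer on his foot."]
# One (premise, alternative) pair per candidate, stacked into a single batch
enc = tokenizer([premise] * len(alternatives), alternatives, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits
print(alternatives[logits.argmax(dim=-1).item()])
```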
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-conll2003_pos | AdapterHub | 2021-11-24T16:22:26Z | 12 | 0 | adapter-transformers | [
"adapter-transformers",
"token-classification",
"bert",
"adapterhub:pos/conll2003",
"en",
"dataset:conll2003",
"arxiv:2104.08247",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- token-classification
- bert
- adapterhub:pos/conll2003
- adapter-transformers
datasets:
- conll2003
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-conll2003_pos` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2003_pos", source="hf")
model.active_adapters = adapter_name
```
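For illustration, POS tags could be predicted as in the sketch below (the example sentence and label handling are assumptions):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout for inference

inputs = tokenizer("Maria gave her brother a new bicycle.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, pred_ids)))  # per-token POS label indices, incl. special tokens
```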
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-conll2000 | AdapterHub | 2021-11-24T16:22:12Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"token-classification",
"bert",
"adapterhub:chunk/conll2000",
"en",
"dataset:conll2000",
"arxiv:2104.08247",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- token-classification
- bert
- adapterhub:chunk/conll2000
- adapter-transformers
datasets:
- conll2000
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-conll2000` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [chunk/conll2000](https://adapterhub.ml/explore/chunk/conll2000/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2000", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-cola | AdapterHub | 2021-11-24T16:21:53Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"bert",
"adapterhub:lingaccept/cola",
"en",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- bert
- adapterhub:lingaccept/cola
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-cola` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [lingaccept/cola](https://adapterhub.ml/explore/lingaccept/cola/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cola", source="hf")
model.active_adapters = adapter_name
```
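As a sketch (not part of the original card), the loaded head can then score a sentence for linguistic acceptability; which logit index means "acceptable" depends on the head's label map:
```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cola", source="hf")
model.active_adapters = adapter_name

inputs = tokenizer("The boy quickly the ball kicked.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

print(probs)  # acceptability scores; label order comes from the head config
```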
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
Lowin/chinese-bigbird-small-1024 | Lowin | 2021-11-24T16:07:28Z | 8 | 3 | transformers | [
"transformers",
"pytorch",
"big_bird",
"feature-extraction",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | ---
language:
- zh
license:
- apache-2.0
---
```python
import jieba_fast
from transformers import BertTokenizer
from transformers import BigBirdModel
class JiebaTokenizer(BertTokenizer):
def __init__(
self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs
):
super().__init__(*args, **kwargs)
self.pre_tokenizer = pre_tokenizer
def _tokenize(self, text, *arg, **kwargs):
split_tokens = []
for text in self.pre_tokenizer(text):
if text in self.vocab:
split_tokens.append(text)
else:
split_tokens.extend(super()._tokenize(text))
return split_tokens
model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-small-1024')
tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-small-1024')
```
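A short follow-up sketch (an assumption, not from the original card) showing how the objects defined above produce sentence features:
```python
import torch

# Reusing the `tokenizer` and `model` objects from the snippet above.
inputs = tokenizer("今天天气很好", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)

print(hidden.shape)
```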
https://github.com/LowinLi/chinese-bigbird
|
Lowin/chinese-bigbird-wwm-base-4096 | Lowin | 2021-11-24T15:58:17Z | 10 | 3 | transformers | [
"transformers",
"pytorch",
"big_bird",
"fill-mask",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language:
- zh
license:
- apache-2.0
---
```python
from transformers import BertTokenizer
from transformers import BigBirdModel
model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-wwm-base-4096')
tokenizer = BertTokenizer.from_pretrained('Lowin/chinese-bigbird-wwm-base-4096')
```
https://github.com/LowinLi/chinese-bigbird |
EMBEDDIA/sloberta | EMBEDDIA | 2021-11-24T13:46:22Z | 2,987 | 5 | transformers | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"sl",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language:
- sl
license: cc-by-sa-4.0
---
# Usage
Load in transformers library with:
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/sloberta")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/sloberta")
```
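For example, the model can be queried through the fill-mask pipeline (a usage sketch; `<mask>` is assumed to be the mask token of this CamemBERT-style model):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="EMBEDDIA/sloberta")
print(fill_mask("Ljubljana je glavno mesto <mask>."))
```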
# SloBERTa
SloBERTa is a monolingual Slovene BERT-like model, closely related to the French CamemBERT model (https://camembert-model.fr/). The corpora used for training the model contain 3.47 billion tokens in total, and the subword vocabulary contains 32,000 tokens. The scripts and programs used for data preparation and training are available at https://github.com/clarinsi/Slovene-BERT-Tool
SloBERTa was trained for 200,000 iterations or about 98 epochs.
## Corpora
The following corpora were used for training the model:
* Gigafida 2.0
* Kas 1.0
* Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora)
* Slovenian parliamentary corpus siParl 2.0
* slWaC
|
huggingtweets/cupcakkesays | huggingtweets | 2021-11-24T12:56:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/cupcakkesays/1637758613095/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1061608813730635776/boCDIPDX_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">cupcakKe lyrics</div>
<div style="text-align: center; font-size: 14px;">@cupcakkesays</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from cupcakKe lyrics.
| Data | cupcakKe lyrics |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 0 |
| Short tweets | 44 |
| Tweets kept | 3156 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3beoi9ei/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cupcakkesays's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kye6z0e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kye6z0e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cupcakkesays')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
arnolfokam/bert-base-uncased-swa | arnolfokam | 2021-11-24T11:55:34Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"swa",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- swa
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
---
# Model description
**bert-base-uncased-swa** is a fine-tuned version of the BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Swahili corpus **(swa)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into a production system.
- The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**bert-base-uncased-swa**| 83.38 | 89.32 | 86.26
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-swa")
model = AutoModelForTokenClassification.from_pretrained("bert-base-uncased-swa")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
ner_results = nlp(example)
print(ner_results)
``` |
arnolfokam/roberta-base-kin | arnolfokam | 2021-11-24T11:46:30Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"NER",
"kin",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- kin
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU nβu Rwanda, bushingiye nanone ku bufatanye hagati yβimigabane ya Afurika nβu Burayi."
---
# Model description
**roberta-base-kin** is a fine-tuned version of the RoBERTa base model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into a production system.
- The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**roberta-base-kin**| 76.26 | 80.58 |78.36
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu wβUmurundi"
ner_results = nlp(example)
print(ner_results)
``` |
arnolfokam/roberta-base-swa | arnolfokam | 2021-11-24T11:41:03Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"NER",
"swa",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- swa
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
---
# Model description
**roberta-base-swa** is a fine-tuned version of the RoBERTa base model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Swahili corpus **(swa)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into a production system.
- The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**roberta-base-swa**| 80.58 | 86.79 | 83.57
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-swa")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-swa")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
ner_results = nlp(example)
print(ner_results)
``` |
pkufool/wenet_speech_lm | pkufool | 2021-11-24T11:38:39Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | * Install requirements
```
pip install jieba
```
* Generate words.txt
```bash
data_dir=/path/to/wenetspeech
# the data_dir contains:
# tree -L 2 .
# .
# |-- TERMS_OF_ACCESS
# |-- WenetSpeech.json
# |-- audio
# |-- dev
# |-- test_meeting
# |-- test_net
# `-- train
grep "\"text\":" $data_dir/WenetSpeech.json | sed -e 's/["text: ]*//g' > text.txt
python -m jieba -d " " text.txt > tokenized.txt
cat tokenized.txt | awk '{for(i=1;i<=NF;i++)print $i}' | sort | uniq > words.txt
```
* Generate N-gram model
```bash
# The original card leaves this block empty; the following kenlm commands are
# a plausible sketch (an assumption, not the author's verified recipe).
lmplz -o 3 --text tokenized.txt --arpa lm.arpa
build_binary lm.arpa lm.bin
```
team-indain-image-caption/hindi-image-captioning | team-indain-image-caption | 2021-11-24T11:22:42Z | 23 | 1 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2022-03-02T23:29:05Z | # Hindi Image Captioning Model
This is an encoder-decoder image captioning model made with a [ViT](https://huggingface.co/google/vit-base-patch16-224-in21k) encoder and [GPT2-Hindi](https://huggingface.co/surajp/gpt2-hindi) as a decoder. This is a first attempt at using ViT + GPT2-Hindi for the image captioning task. We used the Flickr8k Hindi Dataset, available on Kaggle, to train the model.
This model was trained during the Hugging Face course community week, organized by Hugging Face.
## How to use
Here is how to use this model to caption an image of the Flickr8k dataset:
```python
import torch
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, \
VisionEncoderDecoderModel
if torch.cuda.is_available():
device = 'cuda'
else:
device = 'cpu'
url = 'https://shorturl.at/fvxEQ'
image = Image.open(requests.get(url, stream=True).raw)
encoder_checkpoint = 'google/vit-base-patch16-224'
decoder_checkpoint = 'surajp/gpt2-hindi'
model_checkpoint = 'team-indain-image-caption/hindi-image-captioning'
feature_extractor = ViTFeatureExtractor.from_pretrained(encoder_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(decoder_checkpoint)
model = VisionEncoderDecoderModel.from_pretrained(model_checkpoint).to(device)
#Inference
sample = feature_extractor(image, return_tensors="pt").pixel_values.to(device)
clean_text = lambda x: x.replace('<|endoftext|>','').split('\n')[0]
caption_ids = model.generate(sample, max_length = 50)[0]
caption_text = clean_text(tokenizer.decode(caption_ids))
print(caption_text)
```
## Training data
We used the Flickr8k Hindi Dataset, which is the translated version of the original Flickr8k Dataset, available on Kaggle to train the model.
## Training procedure
This model was trained during the Hugging Face course community week, organized by Hugging Face. The training was done on a Kaggle GPU.
## Training Parameters
- epochs = 8,
- batch_size = 8,
- Mixed Precision Enabled
## Team Members
- [Sean Benhur](https://www.linkedin.com/in/seanbenhur/)
- [Herumb Shandilya](https://www.linkedin.com/in/herumb-s-740163131/) |
arnolfokam/bert-base-uncased-kin | arnolfokam | 2021-11-24T11:07:08Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"kin",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- kin
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU nβu Rwanda, bushingiye nanone ku bufatanye hagati yβimigabane ya Afurika nβu Burayi."
---
# Model description
**bert-base-uncased-kin** is a fine-tuned version of the BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into a production system.
- The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**bert-base-uncased-kin**| 75.00 |80.09|77.47
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu wβUmurundi"
ner_results = nlp(example)
print(ner_results)
``` |
Peterard/distilbert_bug_classifier | Peterard | 2021-11-24T04:01:55Z | 4 | 2 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- en
tags:
- text-classification
widget:
- text: "The app crashed when I opened it this morning. Can you fix this please?"
example_title: "Likely bug report"
- text: "Please add a like button!"
example_title: "Unlikely bug report"
---
How to use this classifier:
```
from transformers import pipeline
pipe = pipeline("text-classification", model="Peterard/distilbert_bug_classifier")
pipe("The app crashed when I opened it this morning. Can you fix this please?")
# [{'label': 'bug', 'score': 0.9042391180992126}]
pipe("Please add a like button!")
# [{'label': 'no_bug', 'score': 0.9977496266365051}]
```
N.B. The label will change depending on which is the likelier class |
Peterard/distilbert_feature_classifier | Peterard | 2021-11-24T03:59:16Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- en
tags:
- text-classification
widget:
- text: "Please add a like button!"
example_title: "Likely feature request"
- text: "The app crashed when I opened it this morning. Can you fix this please?"
example_title: "Unlikely feature request"
---
How to use this classifier:
```
from transformers import pipeline
pipe = pipeline("text-classification", model="Peterard/distilbert_feature_classifier")
pipe("Please add a like button!")
# [{'label': 'feature_request', 'score': 0.8930749893188477}]
pipe("The app crashed when I opened it this morning. Can you fix this please?")
#[{'label': 'no_feature_request', 'score': 0.9971746206283569}]
```
N.B. The label will change depending on which is the likelier class |
jb2k/bert-base-multilingual-cased-language-detection | jb2k | 2021-11-24T01:36:01Z | 4,142 | 14 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | # bert-base-multilingual-cased-language-detection
A model for language detection with support for 45 languages
## Model description
This model was created by fine-tuning
[bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the [common language](https://huggingface.co/datasets/common_language) dataset.
This dataset has support for 45 languages, which are listed below:
```
Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh
```
## Evaluation
This model was evaluated on the test split of the [common language](https://huggingface.co/datasets/common_language) dataset, and achieved the following metrics:
* Accuracy: 97.8%
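The card does not include a usage snippet; a minimal sketch (an assumption) is shown below. Note that the human-readable label names depend on the `id2label` mapping stored in the model config:
```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="jb2k/bert-base-multilingual-cased-language-detection",
)
print(detector("Bonjour, comment allez-vous ?"))
```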
|
DeepPavlov/rubert-base-cased | DeepPavlov | 2021-11-23T08:03:04Z | 205,575 | 95 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"ru",
"arxiv:1905.07213",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | ---
language:
- ru
---
# rubert-base-cased
RuBERT (Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT-base as an initialization for RuBERT [1].
08.11.2021: uploaded model with MLM and NSP heads
[1]: Kuratov, Y., Arkhipov, M. (2019). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint [arXiv:1905.07213](https://arxiv.org/abs/1905.07213).
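A minimal feature-extraction sketch (not part of the original card):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")
model = AutoModel.from_pretrained("DeepPavlov/rubert-base-cased")

inputs = tokenizer("Привет, мир!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, seq_len, 768)
```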
|
Maltehb/aelaectra-danish-electra-small-uncased | Maltehb | 2021-11-23T06:39:20Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"ælæctra",
"danish",
"ELECTRA-Small",
"replaced token detection",
"da",
"dataset:DAGW",
"arxiv:2003.10555",
"arxiv:1810.04805",
"arxiv:2005.03521",
"license:mit",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
language: "da"
co2_eq_emissions: 4009.5
tags:
- ælæctra
- pytorch
- danish
- ELECTRA-Small
- replaced token detection
license: "mit"
datasets:
- DAGW
metrics:
- f1
---
# Ælæctra - A Step Towards More Efficient Danish Natural Language Processing
**Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Initially a cased and an uncased model are released. It was created as part of a Cognitive Science bachelor's thesis.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilities; please do note how GitHub still does not support the Danish characters "*Æ, Ø and Å*", as the title of this repository becomes "*-l-ctra*". How ironic.
Here is an example on how to load both the cased and the uncased Ælæctra model in [PyTorch](https://pytorch.org/) using the [🤗 Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-cased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-cased")
```
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-uncased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-uncased")
```
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
| Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) |
| mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) |
| mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) |
On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020); however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model, read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/)
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model.
### Fine-tuning
To fine-tune any Ælæctra model, follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/)
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & SΓΈgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597β4604. https://www.aclweb.org/anthology/2020.lrec-1.565
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521
#### Acknowledgements
As the majority of this repository is build upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20Ælæctra) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ |
huggingtweets/nickadamsinusa | huggingtweets | 2021-11-23T01:45:51Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/nickadamsinusa/1637631945685/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1270107259230707717/__afoYsM_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nick Adams</div>
<div style="text-align: center; font-size: 14px;">@nickadamsinusa</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nick Adams.
| Data | Nick Adams |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 394 |
| Short tweets | 106 |
| Tweets kept | 2748 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2wacq1mt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nickadamsinusa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2r0vox67) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2r0vox67/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nickadamsinusa')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/dril-horse_ebooks-pukicho | huggingtweets | 2021-11-22T22:54:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/dril-horse_ebooks-pukicho/1637621684272/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/866045441942487041/xRAnnstd_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1096005346/1_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Pukicho & Horse ebooks</div>
<div style="text-align: center; font-size: 14px;">@dril-horse_ebooks-pukicho</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Pukicho & Horse ebooks.
| Data | wint | Pukicho | Horse ebooks |
| --- | --- | --- | --- |
| Tweets downloaded | 3226 | 2989 | 3200 |
| Retweets | 466 | 90 | 0 |
| Short tweets | 308 | 292 | 421 |
| Tweets kept | 2452 | 2607 | 2779 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29iqmln0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-horse_ebooks-pukicho's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29cfj39j) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29cfj39j/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-horse_ebooks-pukicho')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gayanin/bart-mlm-pubmed-35 | gayanin | 2021-11-22T21:16:10Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-mlm-pubmed-35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed-35
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9359
- Rouge2 Precision: 0.5451
- Rouge2 Recall: 0.4232
- Rouge2 Fmeasure: 0.4666
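The auto-generated card omits a usage snippet. Since this is a BART-based mask-filling model, a plausible sketch (an assumption, including the mask format) is:
```python
from transformers import pipeline

# Hypothetical usage: the card does not document the exact input format.
filler = pipeline("text2text-generation", model="gayanin/bart-mlm-pubmed-35")
print(filler("The patient was diagnosed with <mask> diabetes."))
```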
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.4156 | 1.0 | 663 | 1.0366 | 0.5165 | 0.3967 | 0.4394 |
| 1.1773 | 2.0 | 1326 | 0.9841 | 0.5354 | 0.4168 | 0.4589 |
| 1.0894 | 3.0 | 1989 | 0.9554 | 0.5346 | 0.4133 | 0.4563 |
| 0.9359 | 4.0 | 2652 | 0.9440 | 0.5357 | 0.4163 | 0.4587 |
| 0.8758 | 5.0 | 3315 | 0.9340 | 0.5428 | 0.4226 | 0.465 |
| 0.8549 | 6.0 | 3978 | 0.9337 | 0.5385 | 0.422 | 0.4634 |
| 0.7743 | 7.0 | 4641 | 0.9330 | 0.542 | 0.422 | 0.4647 |
| 0.7465 | 8.0 | 5304 | 0.9315 | 0.5428 | 0.4231 | 0.4654 |
| 0.7348 | 9.0 | 5967 | 0.9344 | 0.5462 | 0.4244 | 0.4674 |
| 0.7062 | 10.0 | 6630 | 0.9359 | 0.5451 | 0.4232 | 0.4666 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
samantharhay/wav2vec2-base-libir-zenodo | samantharhay | 2021-11-22T19:29:29Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-libir-zenodo
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-libir-zenodo
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4238
- Wer: 0.4336
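The auto-generated card omits a usage snippet; a minimal transcription sketch (an assumption) is:
```python
from transformers import pipeline

# Hypothetical usage: the audio file path is a placeholder.
asr = pipeline("automatic-speech-recognition", model="samantharhay/wav2vec2-base-libir-zenodo")
print(asr("sample.wav"))
```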
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.053 | 1.0 | 31 | 3.1494 | 0.7345 |
| 2.9742 | 2.0 | 62 | 3.0527 | 0.7257 |
| 2.9139 | 3.0 | 93 | 2.8808 | 0.7257 |
| 2.6586 | 4.0 | 124 | 2.6648 | 0.6726 |
| 2.7117 | 5.0 | 155 | 2.4695 | 0.6372 |
| 2.5173 | 6.0 | 186 | 2.3087 | 0.6195 |
| 2.3665 | 7.0 | 217 | 2.2745 | 0.6018 |
| 2.1276 | 8.0 | 248 | 2.2180 | 0.5752 |
| 2.1624 | 9.0 | 279 | 2.1311 | 0.5752 |
| 2.0312 | 10.0 | 310 | 2.0358 | 0.5575 |
| 2.0652 | 11.0 | 341 | 1.9146 | 0.5310 |
| 1.7963 | 12.0 | 372 | 1.8346 | 0.5221 |
| 1.6811 | 13.0 | 403 | 1.8351 | 0.5398 |
| 1.5929 | 14.0 | 434 | 1.8256 | 0.4779 |
| 1.6644 | 15.0 | 465 | 1.7572 | 0.4779 |
| 1.5411 | 16.0 | 496 | 1.8740 | 0.4779 |
| 1.4027 | 17.0 | 527 | 1.5143 | 0.4779 |
| 1.2634 | 18.0 | 558 | 1.3864 | 0.4867 |
| 1.1053 | 19.0 | 589 | 1.3192 | 0.4425 |
| 1.0517 | 20.0 | 620 | 1.4705 | 0.4602 |
| 1.1033 | 21.0 | 651 | 1.6006 | 0.4956 |
| 0.9992 | 22.0 | 682 | 1.4748 | 0.5044 |
| 0.8987 | 23.0 | 713 | 1.3544 | 0.4867 |
| 0.9656 | 24.0 | 744 | 1.2673 | 0.4336 |
| 0.952 | 25.0 | 775 | 1.3955 | 0.4071 |
| 0.8507 | 26.0 | 806 | 1.3520 | 0.4425 |
| 0.8269 | 27.0 | 837 | 1.8992 | 0.4336 |
| 0.7255 | 28.0 | 868 | 1.9850 | 0.4425 |
| 0.8269 | 29.0 | 899 | 3.0089 | 0.4425 |
| 0.6178 | 30.0 | 930 | 1.4238 | 0.4336 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
JorisCos/VAD_Net | JorisCos | 2021-11-22T17:17:23Z | 7 | 0 | asteroid | [
"asteroid",
"pytorch",
"audio",
"VADNet",
"VAD",
"Voice Activity Detection",
"dataset:LibriVAD",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
tags:
- asteroid
- audio
- VADNet
- VAD
- Voice Activity Detection
datasets:
- LibriVAD
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/VAD_Net`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
segment: 3
train_dir: /home/jcosentino/VAD_dataset/metadata/sets/train.json
valid_dir: /home/jcosentino/VAD_dataset/metadata/sets/dev.json
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
main_args:
exp_dir: exp/full_not_causal_f1/
help: null
masknet:
bn_chan: 128
causal: false
hid_chan: 512
mask_act: relu
n_blocks: 3
n_repeats: 5
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments: {}
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On LibriVAD min test set :
```yml
accuracy: 0.8196149023502931,
precision: 0.8305009048356607,
recall: 0.8869202491310206,
f1_score: 0.8426184545700124
```
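A minimal loading sketch, assuming the standard Asteroid `from_pretrained` API (the card itself shows no inference code):
```python
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/VAD_Net")
wav = torch.randn(1, 16000)  # dummy 1-second waveform at 16 kHz (assumption)
with torch.no_grad():
    vad_scores = model(wav)

print(vad_scores.shape)
```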
License notice:
This work "VAD_Net" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The [DNS challenge](https://github.com/microsoft/DNS-Challenge) noises, [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/).
"VAD_Net" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
renBaikau/alphaDelay | renBaikau | 2021-11-22T12:21:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: alphaDelay
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alphaDelay
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6648
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 82.3335 | 5.0 | 25 | 14.0648 | 1.0 |
| 6.1049 | 10.0 | 50 | 3.7145 | 1.0 |
| 3.9873 | 15.0 | 75 | 3.6648 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
malteos/aspect-cord19-scibert-scivocab-uncased | malteos | 2021-11-22T10:13:31Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"classification",
"similarity",
"sci",
"en",
"dataset:cord19",
"arxiv:2010.06395",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- sci
- en
tags:
- classification
- similarity
license: mit
datasets:
- cord19
---
# Aspect-based Document Similarity for Research Papers
A `scibert-scivocab-uncased` model fine-tuned on the CORD-19 corpus as in [Aspect-based Document Similarity for Research Papers](https://arxiv.org/abs/2010.06395).
<img src="https://raw.githubusercontent.com/malteos/aspect-document-similarity/master/docrel.png">
See GitHub for more details: https://github.com/malteos/aspect-document-similarity
## Demo
<a href="https://colab.research.google.com/github/malteos/aspect-document-similarity/blob/master/demo.ipynb"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Google Colab"></a>
You can try our trained models directly on Google Colab on all papers available on Semantic Scholar (via DOI, ArXiv ID, ACL ID, PubMed ID):
<a href="https://colab.research.google.com/github/malteos/aspect-document-similarity/blob/master/demo.ipynb"><img src="https://raw.githubusercontent.com/malteos/aspect-document-similarity/master/demo.gif" alt="Click here for demo"></a>
|
kssteven/ibert-roberta-base | kssteven | 2021-11-22T10:09:32Z | 2,805 | 1 | transformers | [
"transformers",
"pytorch",
"ibert",
"fill-mask",
"arxiv:1907.11692",
"arxiv:2101.01321",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | # I-BERT base model
This model, `ibert-roberta-base`, is an integer-only quantized version of [RoBERTa](https://arxiv.org/abs/1907.11692), and was introduced in [this paper](https://arxiv.org/abs/2101.01321).
I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.
In particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.
This can result in up to 4x inference speed-up compared to the floating-point counterpart when tested on an Nvidia T4 GPU.
The best model parameters found via quantization-aware finetuning can then be exported (e.g., to TensorRT) for integer-only deployment of the model.
## Finetuning Procedure
Finetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.
### Full-precision finetuning
Full-precision finetuning of I-BERT is similar to RoBERTa finetuning.
For instance, you can run the following command to finetune on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task.
```
python examples/text-classification/run_glue.py \
--model_name_or_path kssteven/ibert-roberta-base \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 2e-5 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
### Model Quantization
Once you are done with full-precision finetuning, open up `config.json` in your checkpoint directory and set the `quant_mode` attribute to `true`.
```
{
"_name_or_path": "kssteven/ibert-roberta-base",
"architectures": [
"IBertForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"finetuning_task": "mrpc",
"force_dequant": "none",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "ibert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"quant_mode": true,
"tokenizer_class": "RobertaTokenizer",
"transformers_version": "4.4.0.dev0",
"type_vocab_size": 1,
"vocab_size": 50265
}
```
Then, your model will automatically run in integer-only mode when you load the checkpoint.
Also, make sure to delete `optimizer.pt`, `scheduler.pt` and `trainer_state.json` in the same directory.
Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.
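A small helper sketch for this step (hypothetical, not from the I-BERT docs) that flips the flag and clears the stale trainer state:
```python
import json
import os

ckpt = "path/to/checkpoint"  # assumed checkpoint directory
cfg_path = os.path.join(ckpt, "config.json")

with open(cfg_path) as f:
    cfg = json.load(f)
cfg["quant_mode"] = True  # enable integer-only mode
with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)

# Remove stale state so the Trainer starts fresh for integer-only finetuning.
for name in ("optimizer.pt", "scheduler.pt", "trainer_state.json"):
    path = os.path.join(ckpt, name)
    if os.path.exists(path):
        os.remove(path)
```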
### Integer-only finetuning (Quantization-aware training)
Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.
Note that the only difference in the example command below is `model_name_or_path`.
```
python examples/text-classification/run_glue.py \
--model_name_or_path $CHECKPOINT_DIR
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 1e-6 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
## Citation info
If you use I-BERT, please cite [our paper](https://arxiv.org/abs/2101.01321).
```
@article{kim2021bert,
title={I-BERT: Integer-only BERT Quantization},
author={Kim, Sehoon and Gholami, Amir and Yao, Zhewei and Mahoney, Michael W and Keutzer, Kurt},
journal={arXiv preprint arXiv:2101.01321},
year={2021}
}
```
|
ThomasSimonini/mlagents-snowballfight-1vs1-ppo | ThomasSimonini | 2021-11-22T09:54:35Z | 0 | 0 | null | [
"deep-reinforcement-learning",
"reinforcement-learning",
"mlagents",
"license:apache-2.0",
"region:us"
] | reinforcement-learning | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- deep-reinforcement-learning
- reinforcement-learning
- mlagents
environment:
- MLAgents: Snowballfight-1vs1-ppo
model-index:
- name: mlagents-snowballfight-1vs1-ppo
---
# mlagents-snowballfight-1vs1-ppo ☃️
This is a saved model of a PPO 1vs1 agent playing Snowball Fight.
|
huggingtweets/ctrlcreep | huggingtweets | 2021-11-22T09:35:47Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/ctrlcreep/1637573720314/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/855460243152801793/cxX82P3V_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">infineot</div>
<div style="text-align: center; font-size: 14px;">@ctrlcreep</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from infineot.
| Data | infineot |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 171 |
| Short tweets | 51 |
| Tweets kept | 3019 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26459hr9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ctrlcreep's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1prcdcpn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1prcdcpn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ctrlcreep')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
khalidalt/DeBERTa-v3-large-mnli | khalidalt | 2021-11-22T08:38:23Z | 54 | 5 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"en",
"arxiv:2006.03654",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "The Movie have been criticized for the story. However, I think it is a great movie. [SEP] I liked the movie."
---
# DeBERTa-v3-large-mnli
## Model description
This model was trained on the Multi-Genre Natural Language Inference (MultiNLI) dataset, which consists of 433k sentence pairs annotated with textual entailment information.
The model used is [DeBERTa-v3-large from Microsoft](https://huggingface.co/microsoft/deberta-large). DeBERTa v3 outperforms BERT and RoBERTa on the majority of NLU benchmarks by using disentangled attention and an enhanced mask decoder. More information about the original model is available in the [official repository](https://github.com/microsoft/DeBERTa) and the [paper](https://arxiv.org/abs/2006.03654).
## Intended uses & limitations
#### How to use the model
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("khalidalt/DeBERTa-v3-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("khalidalt/DeBERTa-v3-large-mnli").to(device)

premise = "The Movie have been criticized for the story. However, I think it is a great movie."
hypothesis = "I liked the movie."
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1)
label_names = ["entailment", "neutral", "contradiction"]
print(label_names[prediction.argmax(0).tolist()])
```
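The card also tags this checkpoint for zero-shot classification, so it can presumably be used with the standard zero-shot pipeline as well. The candidate labels and input text below are illustrative and are not taken from the original card:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="khalidalt/DeBERTa-v3-large-mnli")
result = classifier(
    "The Movie have been criticized for the story. However, I think it is a great movie.",
    candidate_labels=["positive review", "negative review"],  # illustrative labels
)
print(result["labels"][0], result["scores"][0])
```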
### Training data
This model was trained on the MultiNLI dataset, which consists of 392K sentence pairs annotated with textual entailment labels.
### Training procedure
DeBERTa-v3-large-mnli was trained using the Hugging Face trainer with the following hyperparameters.
```
train_args = TrainingArguments(
learning_rate=2e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=3,
warmup_ratio=0.06,
weight_decay=0.1,
fp16=True,
seed=42,
)
```
### BibTeX entry and citation info
Please cite the [DeBERTa paper](https://arxiv.org/abs/2006.03654) and the [MultiNLI dataset](https://cims.nyu.edu/~sbowman/multinli/paper.pdf) if you use this model, and please link back to this Hugging Face Hub repository. |
wukevin/tcr-bert-mlm-only | wukevin | 2021-11-22T08:32:41Z | 4,032 | 4 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | Pretrained on:
* Masked amino acid modeling
Please see our [main model](https://huggingface.co/wukevin/tcr-bert) for additional details. |
snunlp/KR-Medium | snunlp | 2021-11-22T06:19:42Z | 173 | 7 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"ko",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- ko
---
# KR-BERT-MEDIUM
A pretrained Korean-specific BERT model developed by Computational Linguistics Lab at Seoul National University.
It is based on our character-level [KR-BERT](https://github.com/snunlp/KR-BERT) model, which utilizes a WordPiece tokenizer.
The model name carries the suffix 'MEDIUM' because its training data is expanded from KR-BERT's original dataset. We also provide another model, KR-BERT-EXPANDED, whose training data is expanded even further than that of KR-BERT-MEDIUM, hence the 'MEDIUM' suffix here.
<br>
### Vocab, Parameters and Data
| | Multilingual BERT<br>(Google) | KorBERT<br>(ETRI) | KoBERT<br>(SKT) | KR-BERT character | KR-BERT-MEDIUM |
| -------------: | ---------------------------------------------: | ---------------------: | ----------------------------------: | -------------------------------------: | -------------------------------------: |
| vocab size | 119,547 | 30,797 | 8,002 | 16,424 | 20,000 |
| parameter size | 167,356,416 | 109,973,391 | 92,186,880 | 99,265,066 | 102,015,010 |
| data size | -<br>(The Wikipedia data<br>for 104 languages) | 23GB<br>4.7B morphemes | -<br>(25M sentences,<br>233M words) | 2.47GB<br>20M sentences,<br>233M words | 12.37GB<br>91M sentences,<br>1.17B words |
<br>
The training data for this model is expanded from those of KR-BERT, texts from Korean Wikipedia, and news articles, by adding legal texts crawled from the National Law Information Center and the [Korean Comments dataset](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments). This expansion was done to collect texts from a wider variety of domains than those of KR-BERT. The total data size is about 12.37GB, consisting of 91M sentences and 1.17B words.
The user-generated comment dataset is expected to have stylistic properties similar to the task datasets of NSMC and HSD. Such text includes abbreviations, coinages, emoticons, spacing errors, and typos. Therefore, we added the dataset containing such online properties to our existing formal data, such as news articles and Wikipedia texts, to compose the training data for KR-BERT-MEDIUM. Accordingly, KR-BERT-MEDIUM achieved better results in sentiment analysis than the other models, and performance improved as the training data grew larger and more varied.
This model's vocabulary size is 20,000, and its tokens were trained on the expanded training data using the WordPiece tokenizer.
KR-BERT-MEDIUM was trained for 2M steps with a maximum sequence length of 128, a training batch size of 64, and a learning rate of 1e-4, taking 22 hours to train on a Google Cloud TPU v3-8.
### Models
#### TensorFlow
* BERT tokenizer, character-based model ([download](https://drive.google.com/file/d/1OWXGqr2Z2PWD6ST3MsFmcjM8c2mr8PkE/view?usp=sharing))
#### PyTorch
* You can import it from Transformers!
```python
# pytorch, transformers
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("snunlp/KR-Medium", do_lower_case=False)
model = AutoModel.from_pretrained("snunlp/KR-Medium")
```
### Requirements
- transformers == 4.0.0
- tensorflow < 2.0
## Downstream tasks
* Movie Review Classification on Naver Sentiment Movie Corpus [(NSMC)](https://github.com/e9t/nsmc)
* Hate Speech Detection [(Moon et al., 2020)](https://github.com/kocohub/korean-hate-speech)
#### tensorflow
* After downloading our pre-trained models, put them in a `models` directory.
* Set the output directory (for fine-tuning)
* Select task name: `NSMC` for Movie Review Classification, and `HATE` for Hate Speech Detection
```sh
# tensorflow
python3 run_classifier.py \
--task_name={NSMC, HATE} \
--do_train=true \
--do_eval=true \
--do_predict=true \
--do_lower_case=False\
--max_seq_length=128 \
--train_batch_size=128 \
--learning_rate=5e-05 \
--num_train_epochs=5.0 \
--output_dir={output_dir}
```
<br>
### Performances
TensorFlow, test set performances
| | multilingual BERT | KorBERT<br>character | KR-BERT<br>character<br>WordPiece | KR-BERT-MEDIUM |
|:-----:|-------------------:|----------------:|----------------------------:|-----------------------------------------:|
| NSMC (Acc) | 86.82 | 89.81 | 89.74 | 90.29 |
| Hate Speech (F1) | 52.03 | 54.33 | 54.53 | 57.91 |
<br>
## Contacts
[email protected]
|
kaggleodin/distilbert-base-uncased-finetuned-squad | kaggleodin | 2021-11-22T04:08:36Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2291 | 1.0 | 5533 | 1.1581 |
| 0.9553 | 2.0 | 11066 | 1.1249 |
| 0.7767 | 3.0 | 16599 | 1.1639 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
yuekai/espnet-slu-snips | yuekai | 2021-11-22T02:04:42Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | Fine-tune snips dataset for SLU task using pretrained ASR model with hubert feature
---
language:
- en
recipe: "https://github.com/espnet/espnet/tree/master/egs2/snips/asr1"
datasets:
- snips: smart-lights-en-close-field
metrics:
- F1 score: 91.7
--- |
Ulto/pythonCoPilot3 | Ulto | 2021-11-22T01:24:16Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: pythonCoPilot3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythonCoPilot3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
teven/roberta_kelm_tekgen | teven | 2021-11-22T01:04:55Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# teven/roberta_kelm_tekgen
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/roberta_kelm_tekgen')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('teven/roberta_kelm_tekgen')
model = AutoModel.from_pretrained('teven/roberta_kelm_tekgen')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/roberta_kelm_tekgen)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 976035 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 394379 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
[
{
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
]
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Ulto/pythonCoPilot2 | Ulto | 2021-11-22T00:24:53Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: pythonCoPilot2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythonCoPilot2
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 427 | 4.3782 |
| 4.6698 | 2.0 | 854 | 4.0718 |
| 3.3953 | 3.0 | 1281 | 4.0479 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Ulto/pythonCoPilot | Ulto | 2021-11-21T23:49:37Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: pythonCoPilot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythonCoPilot
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
rafanegrette/t5_spa_gua | rafanegrette | 2021-11-21T17:53:33Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ## Translator of Spanish/Wayuunaiki with T5 model ##
This is a fine-tuned model based on T5, trained on a Spanish-Wayuunaiki corpus.
Wayuunaiki is the native language of the Wayuus, the major indigenous people in the north of Colombia.
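The card does not document the expected input format (for example, whether a task prefix or language tag is required), so the following is only a rough sketch of loading the checkpoint as a sequence-to-sequence model; the example sentence is illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "rafanegrette/t5_spa_gua"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Buenos dias"  # illustrative Spanish input; the required prompt format is not documented
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```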
|
huggingtweets/prathkum | huggingtweets | 2021-11-21T09:58:13Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/prathkum/1637488688526/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1418652395119153153/dvMUbHmM_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pratham</div>
<div style="text-align: center; font-size: 14px;">@prathkum</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pratham.
| Data | Pratham |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 455 |
| Short tweets | 318 |
| Tweets kept | 2473 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2lnm0sab/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @prathkum's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2w7zt05t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2w7zt05t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/prathkum')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
emeraldgoose/bert-base-v1-sports | emeraldgoose | 2021-11-21T05:45:05Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: ko
mask_token: "[MASK]"
widget:
- text: 산악 자전거 경기는 상대적으로 새로운 [MASK] 1990년대에 활성화 되었다.
---
## Data-annotation-nlp-10 (BoostCamp AI)
BERT pre-training was performed on sentences obtained while building a Wikipedia (sports) dataset.
## How to use
```python
from transformers import AutoTokenizer, BertForMaskedLM
model = BertForMaskedLM.from_pretrained("emeraldgoose/bert-base-v1-sports")
tokenizer = AutoTokenizer.from_pretrained("emeraldgoose/bert-base-v1-sports")
text = "산악 자전거 경기는 상대적으로 새로운 [MASK] 1990년대에 활성화 되었다."
inputs = tokenizer.encode(text, return_tensors='pt')
model.eval()
outputs = model(inputs)['logits']
predict = outputs.argmax(-1)[0]
print(tokenizer.decode(predict))
``` |
Leisa/marian-finetuned-kde4-en-to-fr | Leisa | 2021-11-21T05:25:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.94538305859332
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8558
- Bleu: 52.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
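Although this section of the card is empty, the checkpoint can presumably be used like any Marian translation model, for example through the translation pipeline. This is an untested sketch with an illustrative input sentence:
```python
from transformers import pipeline

translator = pipeline("translation", model="Leisa/marian-finetuned-kde4-en-to-fr")
print(translator("This plugin allows you to translate web pages between several languages."))
```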
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|
xiongjie/lightweight-real-ESRGAN-anime | xiongjie | 2021-11-21T04:36:38Z | 0 | 1 | null | [
"onnx",
"region:us"
] | null | 2022-03-02T23:29:05Z | This is super resolution model for anime like illustration that can upscale image 4x.
This model can upscale 256x256 image to 1024x1024 within around 30[ms] on GPU and around 300[ms] on CPU.
Example is [here](https://github.com/xiong-jie-y/ml-examples/tree/master/lightweight_real_esrgan_anime).
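As a rough orientation, the snippet below runs an exported ONNX graph with `onnxruntime`. The file name, input layout (NCHW float32 RGB in [0, 1]), and output scaling are assumptions; refer to the linked example above for the author's actual pre- and post-processing.
```python
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("model.onnx")  # assumed file name
img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float32) / 255.0
x = img.transpose(2, 0, 1)[None]  # HWC -> NCHW, add batch dimension
y = session.run(None, {session.get_inputs()[0].name: x})[0]
out = (y[0].transpose(1, 2, 0).clip(0.0, 1.0) * 255.0).astype(np.uint8)
Image.fromarray(out).save("output_4x.png")  # 4x upscaled result
```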
License: MIT License |
arvalinno/distilbert-base-uncased-finetuned-indosquad-v2 | arvalinno | 2021-11-21T04:15:31Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-indosquad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-indosquad-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6650
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9015 | 1.0 | 9676 | 1.5706 |
| 1.6438 | 2.0 | 19352 | 1.5926 |
| 1.4714 | 3.0 | 29028 | 1.5253 |
| 1.3486 | 4.0 | 38704 | 1.6650 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Ulto/avengers2 | Ulto | 2021-11-21T01:13:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: avengers2
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# avengers2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 56 | 3.9588 |
| No log | 2.0 | 112 | 3.9996 |
| No log | 3.0 | 168 | 4.0131 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0
- Datasets 1.2.1
- Tokenizers 0.10.1
|
mgreenbe/bertlet-base-uncased-for-sequence-classification | mgreenbe | 2021-11-20T17:23:02Z | 4 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: bertlet-base-uncased-for-sequence-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertlet-base-uncased-for-sequence-classification
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
pixyz/distilbert-base-uncased-finetuned-squad | pixyz | 2021-11-20T14:49:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2203 | 1.0 | 5533 | 1.1569 |
| 0.9452 | 2.0 | 11066 | 1.1234 |
| 0.7656 | 3.0 | 16599 | 1.1586 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
jpabbuehl/sagemaker-distilbert-emotion | jpabbuehl | 2021-11-20T14:22:59Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.929
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1446
- Accuracy: 0.929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9345 | 1.0 | 500 | 0.2509 | 0.918 |
| 0.1855 | 2.0 | 1000 | 0.1626 | 0.928 |
| 0.1036 | 3.0 | 1500 | 0.1446 | 0.929 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Leisa/distilbert-base-uncased-finetuned-imdb | Leisa | 2021-11-20T12:12:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5561 | 1.0 | 782 | 2.3738 |
| 2.4474 | 2.0 | 1564 | 2.3108 |
| 2.4037 | 3.0 | 2346 | 2.3017 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|