---
language: tl
tags:
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---

# BERT Tagalog Base Uncased (Whole Word Masking)
Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This particular version uses whole word masking.
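
For readers unfamiliar with the technique, whole word masking masks all subword pieces of a word together during pretraining, rather than masking individual pieces independently. The snippet below is an illustrative sketch only, not part of the original card; it assumes the `DataCollatorForWholeWordMask` utility available in recent versions of the Transformers library:

```python
# Illustrative sketch (not from the original card): whole word masking
# selects words at random and masks every WordPiece of a chosen word
# together -- a word split into multiple '##' pieces is masked as a unit.
from transformers import AutoTokenizer, DataCollatorForWholeWordMask

tokenizer = AutoTokenizer.from_pretrained('jcblaise/bert-tagalog-base-uncased-WWM', do_lower_case=True)
collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)

# Masking is random, so the output differs between calls.
batch = collator([tokenizer("Magandang araw po sa inyong lahat.")])
print(batch["input_ids"])  # some whole words replaced by the [MASK] token id
```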

## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.

```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer

# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/bert-tagalog-base-uncased-WWM', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/bert-tagalog-base-uncased-WWM', do_lower_case=True)

# PyTorch
model = AutoModel.from_pretrained('jcblaise/bert-tagalog-base-uncased-WWM')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/bert-tagalog-base-uncased-WWM', do_lower_case=True)
```
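
Once loaded, the model behaves like any other BERT encoder. The snippet below is a minimal sketch, not part of the original card (the sample sentence and variable names are ours), showing how the PyTorch model loaded above can encode a sentence into a contextual embedding:

```python
# Minimal sketch (assumption): encode a Tagalog sentence with the PyTorch
# model loaded above and take the final-layer [CLS] vector as a simple
# sentence representation. Requires torch and a recent transformers release.
import torch

inputs = tokenizer("Magandang umaga sa inyong lahat!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_embedding = outputs.last_hidden_state[:, 0]  # shape: (1, hidden_size)
print(cls_embedding.shape)  # torch.Size([1, 768]) for a BERT-base model
```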
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
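
As a hypothetical starting point (this is not taken from those scripts; the task head and `num_labels=2` are our assumptions for illustration), the checkpoint can be loaded with a freshly initialized classification head before finetuning on a downstream task:

```python
# Illustrative sketch only (not the authors' actual finetuning setup):
# attach a randomly initialized sequence-classification head to the
# pretrained encoder; the head's weights must then be learned by finetuning.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained(
    'jcblaise/bert-tagalog-base-uncased-WWM', num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(
    'jcblaise/bert-tagalog-base-uncased-WWM', do_lower_case=True)
```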

## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:

```bibtex
@inproceedings{localization2020cruz,
  title={{Localization of Fake News Detection via Multitask Transfer Learning}},
  author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth},
  booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
  pages={2589--2597},
  year={2020},
  url={https://www.aclweb.org/anthology/2020.lrec-1.315}
}

@article{cruz2020establishing,
  title={Establishing Baselines for Text Classification in Low-Resource Languages},
  author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
  journal={arXiv preprint arXiv:2005.02068},
  year={2020}
}

@article{cruz2019evaluating,
  title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
  author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
  journal={arXiv preprint arXiv:1907.00409},
  year={2019}
}
```

## Data and Other Resources
Data used to train this model, as well as other benchmark datasets in Filipino, can be found on my website at https://blaisecruz.com

## Contact
If you have questions or concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]