Commit 4038718
1 Parent(s): 7afb08d
Update README.md
README.md CHANGED
@@ -45,7 +45,7 @@ Our final vocab file available at [https://github.com/sagorbrur/bangla-bert](https://github.com/sagorbrur/bangla-bert)
 * Bangla-Bert was trained with code provided in Google BERT's github repository (https://github.com/google-research/bert)
 * Currently released model follows bert-base-uncased model architecture (12-layer, 768-hidden, 12-heads, 110M parameters)
 * Total Training Steps: 1 Million
-* The model was trained on a single Google Cloud
+* The model was trained on a single Google Cloud GPU
 
 ## Evaluation Results
 
@@ -142,6 +142,11 @@ for pred in nlp(f"আমি বাংলায় {nlp.tokenizer.mask_token} গা
 ## Reference
 * https://github.com/google-research/bert
 
+## Acknowledgements
+
+* Thanks to Google [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) for providing the free GPU credits.
+* Thanks to all the people around us who are always helping us build something for Bengali.
+
 ## Citation
 If you find this model helpful, please cite.
 
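For context, the second hunk's header quotes the README's fill-mask usage example (`for pred in nlp(...)`), truncated by the diff view. Below is a minimal sketch of that usage, assuming the checkpoint is published on the Hugging Face Hub as `sagorsarker/bangla-bert-base`; the model id and the completed Bengali sentence are illustrative, not taken from the truncated line.

```python
# Minimal sketch of the README's fill-mask usage.
# Assumptions: the model id "sagorsarker/bangla-bert-base" and the full
# example sentence are illustrative; the hunk header only shows a truncated line.
from transformers import pipeline

# Build a fill-mask pipeline over the released BERT checkpoint.
nlp = pipeline("fill-mask", model="sagorsarker/bangla-bert-base")

# Mask one token in a Bengali sentence and print the top predictions.
for pred in nlp(f"আমি বাংলায় {nlp.tokenizer.mask_token} গাই।"):
    print(pred["sequence"], pred["score"])
```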