Commit 403d8e3
Parent(s): 94b8b63
Update README.md
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 datasets:
 - Siddharth63/biological_dataset
-license:
+license: apache-2.0
 ---
 # Bioul2-tiny-nl6
 
@@ -39,5 +39,4 @@ This model was only pretrained in a self-supervised way excluding any supervised
 Note: For fine-tuning, you can most likely get better results if you insert a prefix token of [NLU], [NLG], or [S2S] into your input texts. For general language understanding fine-tuning tasks, you could use the [NLU] token. For GPT-style causal language generation, you could use the [S2S] token. The [NLG] token of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language generation, so it could perhaps be used for language generation fine-tuning as well.
 
 ## Acknowledgements
-This project would not have been possible without compute generously provided by Google through the [Google TPU Research Cloud](https://sites.research.google/trc/about/). Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective, associated task definitions and their guidance. Thanks to [Yeb Havinga](https://huggingface.co/yhavinga) for helping me get started with the t5x framework.
-
+This project would not have been possible without compute generously provided by Google through the [Google TPU Research Cloud](https://sites.research.google/trc/about/). Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective, associated task definitions and their guidance. Thanks to [Yeb Havinga](https://huggingface.co/yhavinga) for helping me get started with the t5x framework.
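The prefix-token note in the diff above amounts to a small usage recipe, so a sketch may help. This is a minimal, unverified example that assumes the checkpoint is published on the Hugging Face Hub under an id like `Siddharth63/Bioul2-tiny-nl6` (hypothetical here) and loads with the standard `transformers` T5 classes; the mode token is simply prepended to the raw input string before tokenization.

```python
# Minimal sketch: prepend a UL2 mode token ([NLU]/[NLG]/[S2S]) before tokenizing.
# The checkpoint id below is an assumption for illustration, not taken from the
# commit itself; any T5/UL2-style seq2seq checkpoint would work the same way.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "Siddharth63/Bioul2-tiny-nl6"  # hypothetical Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# [NLU] for understanding-style fine-tuning, [S2S] for causal generation;
# per the note above, [NLG] may also work for generation tasks.
text = "[NLU] Aspirin irreversibly inhibits cyclooxygenase enzymes."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If you fine-tune with a prefix, apply the same mode token consistently to every training example so the inputs match the format the model saw during pretraining.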