Bachstelze committed · verified
Commit 589c897 · 1 Parent(s): 4c54937

Update README.md

Files changed (1)
  1. README.md +24 -0
README.md CHANGED
@@ -1,3 +1,27 @@
  ---
+ tags:
+ - text2text-generation
  license: mit
+ datasets:
+ - CohereForAI/aya_dataset
+ - CohereForAI/aya_collection_language_split
+ - MBZUAI/Bactrian-X
  ---
+ # Model Card of germanInstructionBERTcased for Bertology
+
+ A minimalistic German instruction model built on an already well-analyzed pretrained encoder, dbmdz/bert-base-german-cased.
+ This lets us study the [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) of instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing) (sketched below) and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).
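+
+ A minimal sketch of such an attention inspection; the Hub id `Bachstelze/germanInstructionBERTcased` is a hypothetical placeholder (adjust it to the actual repository), and the linked Colab contains the real analysis:
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, EncoderDecoderModel
+
+ # Hypothetical hub id, used here only for illustration.
+ model_id = "Bachstelze/germanInstructionBERTcased"
+ model = EncoderDecoderModel.from_pretrained(model_id)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ # Run only the encoder and ask it to return its self-attention maps.
+ inputs = tokenizer("Wie heißt die Hauptstadt von Deutschland?", return_tensors="pt")
+ with torch.no_grad():
+     outputs = model.encoder(**inputs, output_attentions=True)
+
+ # One tensor per layer, each of shape (batch, num_heads, seq_len, seq_len).
+ print(len(outputs.attentions), outputs.attentions[0].shape)
+ ```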
14
+
15
+ The training code is released at the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert).
16
+ We used the Huggingface API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [Encoder-Decoder-Models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder) for this purpose.
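+
+ A minimal sketch of that warm-starting step, following the linked blog post; the exact training script lives in the repository above, so treat this as an approximation:
+
+ ```python
+ from transformers import AutoTokenizer, EncoderDecoderModel
+
+ # Warm-start encoder and decoder from the same pretrained German BERT checkpoint.
+ model = EncoderDecoderModel.from_encoder_decoder_pretrained(
+     "dbmdz/bert-base-german-cased", "dbmdz/bert-base-german-cased"
+ )
+ tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
+
+ # BERT has no dedicated BOS/EOS tokens, so CLS/SEP are reused for generation.
+ model.config.decoder_start_token_id = tokenizer.cls_token_id
+ model.config.eos_token_id = tokenizer.sep_token_id
+ model.config.pad_token_id = tokenizer.pad_token_id
+ ```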
+
+ ## Training parameters
+
+ - base model: "dbmdz/bert-base-german-cased"
+ - trained for 3 epochs
+ - batch size of 16
+ - 40,000 warm-up steps
+ - learning rate of 0.0001
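+
+ Expressed with the Hugging Face `Seq2SeqTrainingArguments` API, these settings might look roughly as follows; the output directory and every option not listed above are placeholder assumptions, not values from the actual run:
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="./germanInstructionBERTcased",  # placeholder path
+     num_train_epochs=3,               # trained for 3 epochs
+     per_device_train_batch_size=16,   # batch size of 16
+     warmup_steps=40000,               # 40,000 warm-up steps
+     learning_rate=1e-4,               # learning rate of 0.0001
+ )
+ ```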
+
+ ## Purpose of germanInstructionBERTcased
+
+ germanInstructionBERTcased is intended for research purposes. The model-generated text should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing the model in their applications.
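+
+ A short, hedged usage sketch for generating such text (the Hub id is again the hypothetical placeholder from above):
+
+ ```python
+ from transformers import AutoTokenizer, EncoderDecoderModel
+
+ model_id = "Bachstelze/germanInstructionBERTcased"  # hypothetical id
+ model = EncoderDecoderModel.from_pretrained(model_id)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ inputs = tokenizer("Nenne drei deutsche Bundesländer.", return_tensors="pt")
+ output_ids = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```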