---
tags:
- text2text-generation
license: mit
datasets:
- CohereForAI/aya_dataset
- CohereForAI/aya_collection_language_split
- MBZUAI/Bactrian-X
language:
- de
pipeline_tag: text2text-generation
---
# Model Card of germanInstructionBERTcased for Bertology

A minimalistic German instruction model built on an already well-analyzed, pretrained encoder, dbmdz/bert-base-german-cased.
This lets us study [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) with instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing) and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).
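Outside of the linked notebook, the fine-tuned encoder-decoder can be queried with `output_attentions=True` for such analyses. The snippet below is a minimal sketch; the repository id is a placeholder, not the published checkpoint name:

```python
import torch
from transformers import BertTokenizer, EncoderDecoderModel

# Placeholder repo id -- substitute the actual checkpoint of this model card.
model = EncoderDecoderModel.from_pretrained("<this-model-repo>")
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-german-cased")

inputs = tokenizer("Nenne drei deutsche Bundesländer.", return_tensors="pt")
with torch.no_grad():
    outputs = model(
        input_ids=inputs.input_ids,
        decoder_input_ids=inputs.input_ids[:, :1],  # start from the [CLS] token
        output_attentions=True,
    )

# encoder_attentions holds one tensor per layer with shape
# (batch, heads, source_len, source_len); decoder and cross attentions
# are returned alongside it.
print(len(outputs.encoder_attentions), outputs.encoder_attentions[0].shape)
```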

The training code is released at the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert).
We used the Hugging Face API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [Encoder-Decoder models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder) for this purpose.
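A minimal warm-starting sketch in that spirit (the exact setup lives in the training repository; the special-token wiring below follows the warm-starting blog post and is an assumption about this model's configuration):

```python
from transformers import BertTokenizer, EncoderDecoderModel

# Warm-start both encoder and decoder from the German BERT checkpoint;
# the decoder's cross-attention weights are initialized randomly.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "dbmdz/bert-base-german-cased", "dbmdz/bert-base-german-cased"
)
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-german-cased")

# BERT has no dedicated BOS/EOS tokens, so [CLS] and [SEP] are reused
# as decoder start and end-of-sequence tokens.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.encoder.vocab_size
```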

## Training parameters

- base model: "dbmdz/bert-base-german-cased"
- trained for 3 epochs
- batch size of 16
- 40,000 warm-up steps
- learning rate of 0.0001
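These settings map onto a `Seq2SeqTrainer` setup roughly as sketched below; dataset tokenization and the exact arguments used in the instructionBERT repository are omitted, and `train_dataset` is a stand-in for the tokenized instruction data:

```python
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer

training_args = Seq2SeqTrainingArguments(
    output_dir="germanInstructionBERTcased",
    num_train_epochs=3,               # trained for 3 epochs
    per_device_train_batch_size=16,   # batch size of 16
    warmup_steps=40000,               # 40,000 warm-up steps
    learning_rate=1e-4,               # learning rate of 0.0001
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,                  # the warm-started encoder-decoder from above
    args=training_args,
    train_dataset=train_dataset,  # tokenized instruction/response pairs (not shown)
    tokenizer=tokenizer,
)
trainer.train()
```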

## Purpose of germanInstructionBERTcased
germanInstructionBERTcased is intended for research purposes. Model-generated text should be treated as a starting point rather than a definitive solution for potential use cases, and users should be cautious when employing the model in their applications.