Update README.md
README.md CHANGED
@@ -9,7 +9,7 @@ license: bigcode-openrail-m
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-ModularStarEncoder-1B is an encoder pre-trained on [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train).
+ModularStarEncoder-1B (MoSE) is an encoder pre-trained on [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train).
 ModularStarEncoder is a modular pre-trained encoder with five exit points, allowing users to perform multiple-exit fine-tuning depending on the downstream task.
 We built ModularStarEncoder on top of [StarCoder-2](https://huggingface.co/bigcode/starcoder2-15b), reducing its size from 15B to 1B parameters in bfloat16.
 Our architecture consists of 36 hidden layers, each with 16 attention heads and 4 key-value heads, using Grouped Query Attention (GQA).
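For readers unfamiliar with Grouped Query Attention, the sketch below illustrates how 4 key-value heads can be shared across 16 query heads, matching the head counts stated in the card. It is a minimal illustration only: the batch size, sequence length, and head dimension are placeholder values, not taken from the model.

```python
# Minimal GQA sketch: 16 query heads share 4 key-value heads (group size 4).
# Head counts follow the card; all other dimensions are placeholders.
import torch

batch, seq_len = 2, 8
num_q_heads, num_kv_heads, head_dim = 16, 4, 64   # head_dim is an assumption
group_size = num_q_heads // num_kv_heads           # 4 query heads per KV head

q = torch.randn(batch, num_q_heads, seq_len, head_dim)
k = torch.randn(batch, num_kv_heads, seq_len, head_dim)
v = torch.randn(batch, num_kv_heads, seq_len, head_dim)

# Expand each KV head so it is reused by its group of query heads.
k = k.repeat_interleave(group_size, dim=1)  # -> (batch, 16, seq_len, head_dim)
v = v.repeat_interleave(group_size, dim=1)

# Standard scaled dot-product attention over the expanded KV heads.
scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5
attn_out = scores.softmax(dim=-1) @ v       # (batch, 16, seq_len, head_dim)
print(attn_out.shape)
```

Sharing key-value heads this way shrinks the KV cache and projection parameters relative to full multi-head attention while keeping 16 independent query heads, which is the trade-off GQA is designed for.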