Update README.md
README.md
Check out the full technical report for all the info! <br>
Check out the training library to express your creativity!
## Abstract
General-purpose multilingual vector representations, used in retrieval, regression, and classification, are traditionally obtained from bidirectional encoder models. Despite their wide applicability, encoders have been recently overshadowed by advances in generative decoder-only models. However, many innovations driving this progress are not inherently tied to decoders. In this paper, we revisit the development of multilingual encoders through the lens of these advances, and introduce EuroBERT, a family of multilingual encoders covering European and widely spoken global languages. Our models outperform existing alternatives across a diverse range of tasks, spanning multilingual capabilities, mathematics, and coding, and natively support sequences of up to 8,192 tokens. We also examine the design decisions behind EuroBERT, offering insights into our dataset composition and training pipeline. We publicly release the EuroBERT models, including intermediate training checkpoints, together with our training framework.
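The released checkpoints can be used as drop-in sentence encoders. Below is a minimal sketch of extracting mean-pooled embeddings with the Hugging Face `transformers` library; the model ID `EuroBERT/EuroBERT-210m` and the `trust_remote_code=True` flag are assumptions here, so check the official model cards for the exact identifiers and loading options.

```python
# Minimal sketch: mean-pooled sentence embeddings from a EuroBERT checkpoint.
# The model ID below is an assumption; see the official release for exact names.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "EuroBERT/EuroBERT-210m"  # assumed Hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

sentences = [
    "EuroBERT is a multilingual encoder.",
    "EuroBERT est un encodeur multilingue.",
]
# The models natively handle sequences of up to 8,192 tokens.
inputs = tokenizer(sentences, padding=True, truncation=True,
                   max_length=8192, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Average the last hidden states over non-padding tokens
# to obtain one fixed-size vector per sentence.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (2, hidden_size)
```

The resulting vectors can feed directly into retrieval, regression, or classification pipelines; other pooling strategies (e.g. CLS pooling) are equally valid depending on the downstream task.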
## Contributors
This project was made possible through the collaboration between the MICS laboratory at CentraleSupélec, Diabolocom, Artefact, and Unbabel, as well as the technological support of AMD and CINES. We also highlight the support of the French government through the France 2030 program as part of the ArGiMi project and DataIA Institute, whose contributions facilitated the completion of this work.