---
license: cc-by-sa-4.0
---
# LayoutXLM

**Multimodal (text + layout/format + image) pre-training for document AI**

[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [Github Repository](https://github.com/microsoft/unilm/tree/master/layoutxlm)

## Introduction

LayoutXLM is a multimodal pre-trained model for multilingual document understanding that aims to bridge the language barriers in visually-rich document understanding. Experimental results show that it significantly outperforms existing SOTA cross-lingual pre-trained models on the XFUN dataset.
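
The snippet below is a minimal usage sketch, not part of the original card: it assumes the checkpoint is published on the Hugging Face Hub as `microsoft/layoutxlm-base` and that a `transformers` release with LayoutXLM support (plus `sentencepiece`) is installed; the full model's visual backbone additionally depends on `detectron2`. It only illustrates how words and their 2D bounding boxes (normalized to a 0-1000 coordinate scale) are encoded together.

```python
# Minimal sketch (assumptions): checkpoint name "microsoft/layoutxlm-base",
# a transformers release with LayoutXLM support, and sentencepiece installed.
from transformers import LayoutXLMTokenizer

tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")

# Words from a document page, each paired with its bounding box,
# normalized to a 0-1000 coordinate scale.
words = ["Hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]

# The tokenizer aligns the word-level boxes to the subword tokens it produces.
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
print(encoding.keys())  # e.g. input_ids, bbox, attention_mask
```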