Update README.md
#1 by moonyeonjun - opened

README.md CHANGED
@@ -23,7 +23,7 @@ library_name: transformers
 Phikon-v2 is a Vision Transformer Large pre-trained with Dinov2 self-supervised method on PANCAN-XL, a dataset of 450M 20x magnification histology images sampled from 60K whole slide images.
 PANCAN-XL only incorporates publicly available datasets: CPTAC (6,193 WSI) and TCGA (29,502 WSI) for malignant tissue, and GTEx for normal tissue (13,302 WSI).

-Phikon-v2 improves upon [Phikon](https://huggingface.co/owkin/phikon), our previous
+Phikon-v2 improves upon [Phikon](https://huggingface.co/owkin/phikon), our previous foundation model pre-trained with iBOT on 40M histology images from TCGA (6k WSI), on a large variety of weakly-supervised tasks tailored for biomarker discovery.
 Phikon-v2 is evaluated on external cohorts to avoid any data contamination with PANCAN-XL pre-training dataset, and benchmarked against an exhaustive panel of representation learning and foundation models.

 ## Model Description