Update README.md
README.md CHANGED
@@ -9,8 +9,8 @@ task_categories:
---

## Pan-Multiplex (Pan-M) dataset
-This dataset was to train the Nimbus model for the publication "Improving cell phenotyping by factoring in spatial marker expression patterns with Nimbus".
-The dataset contains multiplexed images from different modalities, tissues and protein marker panels. It was constructed by semi-automatic pipeline, where the cell types assigned by the authors of the original studies that published the data, where mapped back to their expected marker activity.
+This dataset was constructed to train the Nimbus model for the publication "Improving cell phenotyping by factoring in spatial marker expression patterns with Nimbus".
+The dataset contains multiplexed images from different modalities, tissues, and protein marker panels. It was constructed by a semi-automatic pipeline in which the cell types assigned by the authors of the original studies that published the data were mapped back to their expected marker activity. In addition, for 3 FoVs of each dataset, 4 expert annotators proofread ~1.1M annotations, which served as the gold standard for assessing the algorithm.
More details on the construction of the dataset can be found in the paper. The dataset consists of five subsets named `codex_colon`, `mibi_breast`, `mibi_decidua`, `vectra_colon`, and `vectra_pancreas`, each in an individual folder.
After unzipping, the data should be stored in the following folder structure to use the code provided for [training](https://github.com/angelolab/Nimbus) and [inference](https://github.com/angelolab/Nimbus-Inference). To construct the binary segmentation maps used for training, you can use the code in `segmentation_data_prep.py` and `simple_data_prep.py` in the [training repository](https://github.com/angelolab/Nimbus).
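For orientation, here is a minimal sketch of one way to fetch the dataset and unpack the five subsets into per-subset folders as described above. It is not part of the dataset or the Nimbus repositories: the repo id `angelolab/pan-multiplex` and the archive naming pattern (`<subset>*.zip`) are assumptions, so adjust them to the actual files in this repository.

```python
# Sketch: download the dataset snapshot and unzip each subset into its own folder.
# NOTE: the repo id and archive names below are assumptions, not confirmed by the card.
import zipfile
from pathlib import Path

from huggingface_hub import snapshot_download  # pip install huggingface_hub

SUBSETS = ["codex_colon", "mibi_breast", "mibi_decidua", "vectra_colon", "vectra_pancreas"]

# Download all files of the dataset repo to a local cache directory.
local_dir = Path(
    snapshot_download(repo_id="angelolab/pan-multiplex", repo_type="dataset")  # assumed id
)

out_dir = Path("pan_multiplex")
out_dir.mkdir(exist_ok=True)

# Unzip any per-subset archives so each subset ends up in its own folder.
for subset in SUBSETS:
    for archive in local_dir.glob(f"{subset}*.zip"):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(out_dir / subset)
        print(f"extracted {archive.name} -> {out_dir / subset}")
```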