---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
- biology
- histology
- histopathology
---

# CPP-Net Model for Cervical Intraepithelial Neoplasia 2 (CIN2) Panoptic Segmentation

# Model
- **histolytics** implementation of panoptic **CPP-Net**: [https://arxiv.org/abs/2102.06867](https://arxiv.org/abs/2102.06867)
- Backbone encoder: pre-trained **efficientnet_b5** from pytorch-image-models: [https://github.com/huggingface/pytorch-image-models](https://github.com/huggingface/pytorch-image-models)

# USAGE

## 1. Install histolytics and albumentations
```
pip install histolytics
pip install albumentations
```

## 2. Load trained model
```python
from histolytics.models.cppnet_panoptic import CPPNetPanoptic

model = CPPNetPanoptic.from_pretrained("hgsc_v1_efficientnet_b5")
```

## 3. Run inference for one image
```python
from albumentations import Resize, Compose
from histolytics.utils import FileHandler
from histolytics.transforms.albu_transforms import MinMaxNormalization

model.set_inference_mode()

# Resize to a multiple of 32 of your own choosing
transform = Compose([Resize(1024, 1024), MinMaxNormalization()])

im = FileHandler.read_img(IMG_PATH)
im = transform(image=im)["image"]

prob = model.predict(im)
out = model.post_process(prob)
# out = {"nuc": [(nuc instances (H, W), nuc types (H, W))], "tissue": [tissues (H, W)], "cyto": None}
```

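The input side length matters because segmentation encoders like efficientnet_b5 downsample the image in stride-2 stages, so sizes divisible by 32 avoid shape mismatches. A small helper for picking a valid resize target; this helper is an illustrative assumption, not part of histolytics:

```python
def nearest_multiple_of_32(x: int) -> int:
    """Round x to the nearest multiple of 32 (minimum 32)."""
    return max(32, round(x / 32) * 32)

print(nearest_multiple_of_32(1000))  # 992
print(nearest_multiple_of_32(1024))  # 1024
```

Pass the result to `Resize` to get a model-compatible input size.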
## 3.1 Run inference for image batch
```python
import torch
from histolytics.utils import FileHandler

model.set_inference_mode()

# don't use random matrices IRL
batch = torch.rand(8, 3, 1024, 1024)

prob = model.predict(batch)
out = model.post_process(prob)
# out = {
#     "nuc": [
#         (nuc instances (H, W), nuc types (H, W)),
#         ...
#         (nuc instances (H, W), nuc types (H, W))
#     ],
#     "tissue": [
#         tissues (H, W),
#         ...
#         tissues (H, W)
#     ],
#     "cyto": None,
# }
```

## 4. Visualize output
```python
from matplotlib import pyplot as plt
from skimage.color import label2rgb

fig, ax = plt.subplots(1, 4, figsize=(24, 6))
ax[0].imshow(im)
ax[1].imshow(label2rgb(out["nuc"][0][0], bg_label=0))  # inst_map
ax[2].imshow(label2rgb(out["nuc"][0][1], bg_label=0))  # type_map
ax[3].imshow(label2rgb(out["tissue"][0], bg_label=0))  # tissue_map
```
<!--  -->

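The instance and type maps are plain integer label images, so summary statistics can be read off them directly, for example per-class nucleus counts. A minimal sketch assuming the `out` dict from step 3; the function name and the majority-vote assignment are illustrative assumptions, not histolytics API:

```python
import numpy as np

def count_nuclei_per_class(inst_map: np.ndarray, type_map: np.ndarray) -> dict:
    """Count distinct nucleus instances per type class.

    Each instance id in `inst_map` is assigned the majority type of its
    pixels in `type_map`; the background id 0 is skipped.
    """
    counts = {}
    for inst_id in np.unique(inst_map):
        if inst_id == 0:  # background
            continue
        types, freqs = np.unique(type_map[inst_map == inst_id], return_counts=True)
        majority = int(types[np.argmax(freqs)])
        counts[majority] = counts.get(majority, 0) + 1
    return counts

# Tiny synthetic example; real maps come from out["nuc"][0]
inst = np.array([[1, 1, 0], [0, 2, 2], [0, 0, 0]])
typ = np.array([[1, 1, 0], [0, 2, 2], [0, 0, 0]])
print(count_nuclei_per_class(inst, typ))  # {1: 1, 2: 1}
```

The integer keys can be translated to names with the class dictionaries listed under Dataset classes below.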
## Dataset Details
Semi-manually annotated CIN2 samples from a (private) cohort of Helsinki University Hospital.

**Contains:**
- 370 variably sized image crops at 20x magnification
- 168 640 annotated nuclei
- 570 872 983 pixels of annotated tissue regions

## Dataset classes

```python
nuc_classes = {
    0: "background",
    1: "neoplastic",
    2: "inflammatory",
    3: "connective",
    4: "dead",
    5: "glandular_epithelial",
    6: "squamous_epithelial",
}

tissue_classes = {
    0: "background",
    1: "stroma",
    2: "cin",
    3: "squamous_epithelium",
    4: "glandular_epithelium",
    5: "slime",
    6: "blood",
}
```

## Dataset Class Distribution

**Nuclei**:
- connective nuclei: 46 222 (~27.3%)
- neoplastic nuclei: 49 493 (~29.4%)
- inflammatory nuclei: 27 226 (~16.1%)
- dead nuclei: 195 (~0.11%)
- glandular epithelial nuclei: 14 310 (~8.5%)
- squamous epithelial nuclei: 31 194 (~18.5%)

**Tissues**:
- stromal tissue: 28.2%
- CIN tissue: 23.4%
- squamous epithelium: 24.7%
- glandular epithelium: 7.7%
- slime: 6.5%
- blood: 2.5%

# Model Training Details
First, the image crops in the training data were tiled into 224x224px patches with a sliding window (stride=32px).

The rest of the training procedure follows this notebook: [link]

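The tiling step described above can be sketched as a simple sliding window. This is an illustrative sketch under the stated patch and stride sizes, not the authors' actual preprocessing code:

```python
import numpy as np

def tile_image(img: np.ndarray, patch: int = 224, stride: int = 32) -> list:
    """Slide a patch x patch window over an (H, W, C) image with the given stride."""
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            tiles.append(img[y : y + patch, x : x + patch])
    return tiles

# A 256x256 crop yields a 2x2 grid of 224px patches at stride 32
crop = np.zeros((256, 256, 3), dtype=np.uint8)
print(len(tile_image(crop)))  # 4
```

Note that a 32px stride over 224px patches produces heavily overlapping tiles, which acts as data augmentation during training.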
# Citation

histolytics:
```
@article{

}
```

CPP-Net original paper:
```
@article{https://doi.org/10.48550/arxiv.2102.06867,
  doi = {10.48550/ARXIV.2102.06867},
  url = {https://arxiv.org/abs/2102.06867},
  author = {Chen, Shengcong and Ding, Changxing and Liu, Minfeng and Cheng, Jun and Tao, Dacheng},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {CPP-Net: Context-aware Polygon Proposal Network for Nucleus Segmentation},
  publisher = {arXiv},
  year = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

## License
These model weights are released under the Apache License, Version 2.0 (the "License"). You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

## Additional Terms

While the Apache 2.0 License grants broad permissions, we kindly request that users adhere to the following guidelines:

Medical or Clinical Use: This model is not intended for use in the medical diagnosis, treatment, or prevention of disease in real patients. It should not be used as a substitute for professional medical advice.