---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
- biology
- histology
- histopathology
---

# CPP-Net Model for High-Grade Serous Ovarian Cancer Panoptic Segmentation

# Model
- **histolytics** implementation of panoptic **CPP-Net**: [https://arxiv.org/abs/2102.06867](https://arxiv.org/abs/2102.06867)
- Backbone encoder: pre-trained **efficientnet_b5** from pytorch-image-models [https://github.com/huggingface/pytorch-image-models](https://github.com/huggingface/pytorch-image-models)


# USAGE

## 1. Install histolytics and albumentations
```
pip install histolytics
pip install albumentations
```

## 2. Load trained model
```python
from histolytics.models.cppnet_panoptic import CPPNetPanoptic

model = CPPNetPanoptic.from_pretrained("hgsc_v1_efficientnet_b5")
```

## 3. Run inference for one image
```python
from albumentations import Resize, Compose
from histolytics.utils import FileHandler
from histolytics.transforms.albu_transforms import MinMaxNormalization

model.set_inference_mode()

# Resize to a multiple of 32 of your choosing
transform = Compose([Resize(1024, 1024), MinMaxNormalization()])

im = FileHandler.read_img(IMG_PATH)
im = transform(image=im)["image"]

prob = model.predict(im)
out = model.post_process(prob)
# out = {"nuc": [(nuc instances (H, W), nuc types (H, W))], "tissue": [tissues (H, W)], "cyto": None}
```
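Since the image was resized before inference, the returned maps correspond to the resized (1024x1024) image. If you need them at the original image size, a minimal sketch using nearest-neighbor resizing (so the integer instance/type labels are preserved; this is not part of the histolytics API) could look like this:

```python
from skimage.transform import resize

# original image size (before the albumentations Resize)
orig = FileHandler.read_img(IMG_PATH)
H, W = orig.shape[:2]

inst_map, type_map = out["nuc"][0]
# order=0 (nearest neighbor) keeps the integer labels intact
inst_map = resize(inst_map, (H, W), order=0, preserve_range=True, anti_aliasing=False).astype(inst_map.dtype)
type_map = resize(type_map, (H, W), order=0, preserve_range=True, anti_aliasing=False).astype(type_map.dtype)
```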

## 3.1 Run inference for an image batch
```python
import torch
from histolytics.utils import FileHandler

model.set_inference_mode()

# placeholder batch for illustration; in practice, build it from real images
# (see the sketch after this code block)
batch = torch.rand(8, 3, 1024, 1024)

prob = model.predict(batch)
out = model.post_process(prob)
# out = {
#  "nuc": [
#    (nuc instances (H, W), nuc types (H, W)),
#    (nuc instances (H, W), nuc types (H, W)),
#    ...
#    (nuc instances (H, W), nuc types (H, W))
#  ],
#  "tissue": [
#    tissues (H, W),
#    tissues (H, W),
#    ...
#    tissues (H, W)
#  ],
#  "cyto": None,
# }
```
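To build a real batch instead of the random placeholder, one option is to stack transformed images into an `(N, C, H, W)` float tensor. A minimal sketch, reusing `FileHandler` and the `transform` from step 3, and assuming `model.predict` accepts such a tensor (`IMG_PATHS` is a hypothetical list of image paths):

```python
import torch
from histolytics.utils import FileHandler

# IMG_PATHS: hypothetical list of paths to your H&E images
ims = [FileHandler.read_img(p) for p in IMG_PATHS]

# same resize + min-max normalization as in step 3 (HWC numpy arrays)
ims = [transform(image=im)["image"] for im in ims]

# stack into an (N, C, H, W) float tensor
batch = torch.stack([torch.from_numpy(im).permute(2, 0, 1).float() for im in ims])
```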

## 4. Visualize output
```python
from matplotlib import pyplot as plt
from skimage.color import label2rgb

fig, ax = plt.subplots(1, 4, figsize=(24, 6))
ax[0].imshow(im)
ax[1].imshow(label2rgb(out["nuc"][0][0], bg_label=0)) # inst_map
ax[2].imshow(label2rgb(out["nuc"][0][1], bg_label=0)) # type_map
ax[3].imshow(label2rgb(out["tissue"][0], bg_label=0)) # tissue_map
```
![out](cppnet_out_pan.png)

## Dataset Details
Semi-manually annotated HGSC primary omental samples from the (private) DECIDER cohort. The data were acquired in the DECIDER project,
funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 965193.

**Contains:**
- 198 variably sized image crops at 20x magnification
- 98 468 annotated nuclei
- 699 744 885 pixels of annotated tissue regions

## Dataset classes

```
nuc_classes = {
    0: "background",
    1: "neoplastic",
    2: "inflammatory",
    3: "connective",
    4: "dead",
    5: "macrophage_cell",
    6: "macrophage_nuc",
}

tissue_classes = {
    0: "background",
    1: "stroma",
    2: "omental_fat",
    3: "tumor",
    4: "hemorrage",
    5: "necrosis",
    6: "serum",
}
```
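These integer labels are what the predicted type and tissue maps contain. As a small usage sketch (not part of the histolytics API), the `nuc_classes` dictionary can be combined with the output from step 3 to count predicted nuclei per class:

```python
import numpy as np

inst_map, type_map = out["nuc"][0]  # from step 3

for cls_id, cls_name in nuc_classes.items():
    if cls_id == 0:
        continue  # skip background
    # instance labels that overlap this nucleus type
    ids = np.unique(inst_map[type_map == cls_id])
    ids = ids[ids != 0]  # drop the background label
    print(f"{cls_name}: {len(ids)} nuclei")
```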

## Dataset Class Distribution
**Nuclei**:
- connective nuclei: 46 100 (~47%)
- neoplastic nuclei: 22 761 (~23%)
- inflammatory nuclei: 19 185 (~19%)
- dead nuclei: 1859 (~2%)
- macrophage nuclei and cytoplasms: 4550 (~5%)

**Tissues**:
- stromal tissue: 28%
- tumor tissue: 29%
- omental fat: 20%
- hemorrhage: 5%
- necrosis: 13%
- serum: 5%

# Model Training Details
First, the image crops in the training data were tiled into 224x224 px patches with a sliding window (stride = 32 px), as sketched below.
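A minimal numpy sketch of that sliding-window tiling (the function name and edge handling are illustrative, not the exact training code):

```python
import numpy as np

def tile_image(im: np.ndarray, patch: int = 224, stride: int = 32) -> np.ndarray:
    """Tile an (H, W, C) image into (N, patch, patch, C) sliding-window patches."""
    H, W = im.shape[:2]
    tiles = []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            tiles.append(im[y : y + patch, x : x + patch])
    return np.stack(tiles)
```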

The rest of the training procedure follows this notebook: [link]

# Citation

histolytics:
```
@article{

}
```

CPP-Net original paper:
```
@article{https://doi.org/10.48550/arxiv.2102.06867,
  doi = {10.48550/ARXIV.2102.06867},
  url = {https://arxiv.org/abs/2102.06867},
  author = {Chen, Shengcong and Ding, Changxing and Liu, Minfeng and Cheng, Jun and Tao, Dacheng},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {CPP-Net: Context-aware Polygon Proposal Network for Nucleus Segmentation},
  publisher = {arXiv},
  year = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

## License
These model weights are released under the Apache License, Version 2.0 (the "License"). You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

## Additional Terms

While the Apache 2.0 License grants broad permissions, we kindly request that users adhere to the following guidelines:

- **Medical or Clinical Use**: This model is not intended for use in the medical diagnosis, treatment, or prevention of disease in real patients. It should not be used as a substitute for professional medical advice.