---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
- biology
- histology
- histopathology
---
# CPP-Net Model for Cervical Intraepithelial Neoplasia 2 (CIN2) Nuclei Segmentation
# Model
- **cellseg_models.pytorch** implementation of **CPP-Net**: [https://arxiv.org/abs/2102.06867](https://arxiv.org/abs/2102.06867)
- Backbone encoder: pre-trained **efficientnet_b5** from pytorch-image-models [https://github.com/huggingface/pytorch-image-models](https://github.com/huggingface/pytorch-image-models)
# USAGE
## 1. Install cellseg_models.pytorch and albumentations
```
pip install cellseg-models-pytorch
pip install albumentations
```
## 2. Load trained model
```python
from cellseg_models_pytorch.models.cppnet import CPPNet
model = CPPNet.from_pretrained("hgsc_v1_efficientnet_b5")
```
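If a GPU is available, the model can be moved to it before running inference. This is plain PyTorch device handling, assuming the model behaves like a standard `torch.nn.Module`; inputs passed to it must then live on the same device:
```python
import torch

# Plain PyTorch device placement; not specific to cellseg_models.pytorch.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
```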
## 3. Run inference for one image
```python
from albumentations import Resize, Compose
from cellseg_models_pytorch.utils import FileHandler
from cellseg_models_pytorch.transforms.albu_transforms import MinMaxNormalization
model.set_inference_mode()
# Resize to a multiple of 32 of your choosing
transform = Compose([Resize(1024, 1024), MinMaxNormalization()])
im = FileHandler.read_img(IMG_PATH)  # IMG_PATH: path to your input image
im = transform(image=im)["image"]
prob = model.predict(im)
out = model.post_process(prob)
# out = {"nuc": [(nuc instances (H, W), nuc types (H, W))], "cyto": None, "tissue": None}
```
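For a quick summary of the result, the two maps can be inspected with plain NumPy. A small sketch, assuming both are integer (H, W) arrays with 0 as background; the type ids follow the class dictionary listed under Dataset classes below:
```python
import numpy as np

inst_map, type_map = out["nuc"][0]

# Every unique non-zero label in the instance map is one nucleus.
inst_ids = np.unique(inst_map)
print(f"nuclei detected: {np.count_nonzero(inst_ids)}")

# Pixel counts per type id (0 = background); see the class dict below.
type_ids, pixel_counts = np.unique(type_map, return_counts=True)
for t, c in zip(type_ids, pixel_counts):
    print(f"type {t}: {c} px")
```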
## 3.1 Run inference for an image batch
```python
import torch
model.set_inference_mode()
# A dummy batch for illustration; don't use random tensors in practice
batch = torch.rand(8, 3, 1024, 1024)
prob = model.predict(batch)
out = model.post_process(prob)
out = model.post_process(prob)
# out = {
# "nuc": [
# (nuc instances (H, W), nuc types (H, W)),
# (nuc instances (H, W), nuc types (H, W)),
# .
# .
# .
# (nuc instances (H, W), nuc types (H, W))
# ],
# "cyto": None,
# "tissue": None
#}
```
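To build a real batch instead, the single-image preprocessing from step 3 can be reused and the results stacked into a tensor. A minimal sketch, assuming `FileHandler.read_img` returns an (H, W, 3) array and `predict` accepts a (B, 3, H, W) float tensor, as the dummy batch above implies; `IMG_PATHS` is a hypothetical list of your own image paths:
```python
import torch
from albumentations import Resize, Compose
from cellseg_models_pytorch.utils import FileHandler
from cellseg_models_pytorch.transforms.albu_transforms import MinMaxNormalization

transform = Compose([Resize(1024, 1024), MinMaxNormalization()])

# IMG_PATHS: hypothetical list of paths to your own images.
ims = [transform(image=FileHandler.read_img(p))["image"] for p in IMG_PATHS]

# Stack (H, W, 3) float arrays into a (B, 3, H, W) float tensor.
batch = torch.stack([torch.from_numpy(im).permute(2, 0, 1) for im in ims]).float()
```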
## 4. Visualize output
```python
from matplotlib import pyplot as plt
from skimage.color import label2rgb
fig, ax = plt.subplots(1, 3, figsize=(18, 6))
ax[0].imshow(im)
ax[1].imshow(label2rgb(out["nuc"][0][0], bg_label=0)) # inst_map
ax[2].imshow(label2rgb(out["nuc"][0][1], bg_label=0)) # type_map
```
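For a quick overlay of the predicted instance boundaries on the input, `skimage.segmentation.mark_boundaries` works directly on a label image; this assumes `im` is the min-max normalized float image from step 3:
```python
from skimage.segmentation import mark_boundaries
from matplotlib import pyplot as plt

# Draw instance boundaries over the (normalized) input image.
fig, ax = plt.subplots(figsize=(6, 6))
ax.imshow(mark_boundaries(im, out["nuc"][0][0]))
ax.set_axis_off()
```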
## Dataset Details
Semi-manually annotated CIN2 samples from a private cohort of Helsinki University Hospital.
**Contains:**
- 370 variably sized image crops at 20x magnification
- 168 640 annotated nuclei
## Dataset classes
```python
nuc_classes = {
    0: "background",
    1: "neoplastic",
    2: "inflammatory",
    3: "connective",
    4: "dead",
    5: "glandular_epithelial",
    6: "squamous_epithelial",
}
```
## Dataset Class Distribution
- neoplastic nuclei: 49 493 (~29.4%)
- inflammatory nuclei: 27 226 (~16.1%)
- connective nuclei: 46 222 (~27.3%)
- dead nuclei: 195 (~0.11%)
- glandular epithelial nuclei: 14 310 (~8.5%)
- squamous epithelial nuclei: 31 194 (~18.5%)
# Model Training Details
First, the image crops in the training data were tiled into 224x224px patches with a sliding window (stride=32px).
The rest of the training procedure follows this notebook: [link]
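As an illustration of that tiling step (a hedged sketch, not the exact training code), a simple sliding-window extractor over a single crop could look like this:
```python
import numpy as np

def tile_image(im: np.ndarray, patch: int = 224, stride: int = 32) -> list[np.ndarray]:
    """Extract overlapping patch x patch crops with a sliding window."""
    H, W = im.shape[:2]
    return [
        im[y : y + patch, x : x + patch]
        for y in range(0, H - patch + 1, stride)
        for x in range(0, W - patch + 1, stride)
    ]
```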
# Citation
cellseg_models.pytorch:
```
@misc{https://doi.org/10.5281/zenodo.12666959,
  doi = {10.5281/ZENODO.12666959},
  url = {https://zenodo.org/doi/10.5281/zenodo.12666959},
  author = {Okunator},
  title = {okunator/cellseg_models.pytorch: v0.2.0},
  publisher = {Zenodo},
  year = {2024},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
CPP-Net original paper:
```
@article{https://doi.org/10.48550/arxiv.2102.06867,
  doi = {10.48550/ARXIV.2102.06867},
  url = {https://arxiv.org/abs/2102.06867},
  author = {Chen, Shengcong and Ding, Changxing and Liu, Minfeng and Cheng, Jun and Tao, Dacheng},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {CPP-Net: Context-aware Polygon Proposal Network for Nucleus Segmentation},
  publisher = {arXiv},
  year = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
## License
These model weights are released under the Apache License, Version 2.0 (the "License"). You may obtain a copy of the License at:
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
## Additional Terms
While the Apache 2.0 License grants broad permissions, we kindly request that users adhere to the following guidelines:
**Medical or Clinical Use:** This model is not intended for use in medical diagnosis, treatment, or prevention of disease in real patients. It should not be used as a substitute for professional medical advice.