---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
- biology
- histology
- histopathology
---

# CPP-Net Model for Cervical Intraepithelial Neoplasia 2 (CIN2) Nuclei Segmentation

# Model
- **cellseg_models.pytorch** implementation of **CPP-Net**: [https://arxiv.org/abs/2102.06867](https://arxiv.org/abs/2102.06867)
- Backbone encoder: pre-trained **efficientnet_b5** from [pytorch-image-models](https://github.com/huggingface/pytorch-image-models)

# USAGE

## 1. Install cellseg_models.pytorch and albumentations
```
pip install cellseg-models-pytorch
pip install albumentations
```

## 2. Load trained model
```python
from cellseg_models_pytorch.models.cppnet import CPPNet

model = CPPNet.from_pretrained("hgsc_v1_efficientnet_b5")
```
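
If a GPU is available, the model can be moved onto it before inference like any PyTorch module (a minimal sketch; whether `model.predict` also handles device placement internally is an assumption to verify against the cellseg_models.pytorch docs):

```python
import torch

# pick a GPU when available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
```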

## 3. Run inference for one image
```python
from albumentations import Resize, Compose
from cellseg_models_pytorch.utils import FileHandler
from cellseg_models_pytorch.transforms.albu_transforms import MinMaxNormalization

model.set_inference_mode()

# Resize to a multiple of 32 of your own choosing
transform = Compose([Resize(1024, 1024), MinMaxNormalization()])

im = FileHandler.read_img(IMG_PATH)  # IMG_PATH: path to your input image
im = transform(image=im)["image"]

prob = model.predict(im)
out = model.post_process(prob)
# out = {"nuc": [(nuc instances (H, W), nuc types (H, W))], "cyto": None, "tissue": None}
```
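
Since the image was resized to 1024x1024 before prediction, the output maps live in that resized space. If you need them at the original resolution, nearest-neighbor interpolation preserves the integer instance and type labels (a minimal sketch using scikit-image; it assumes the output maps are 2D integer arrays as in the comment above):

```python
import numpy as np
from skimage.transform import resize

orig = FileHandler.read_img(IMG_PATH)
h, w = orig.shape[:2]

inst_map, type_map = out["nuc"][0]
# order=0 (nearest neighbor) keeps the integer label ids intact
inst_map = resize(inst_map, (h, w), order=0, preserve_range=True, anti_aliasing=False).astype(np.int32)
type_map = resize(type_map, (h, w), order=0, preserve_range=True, anti_aliasing=False).astype(np.int32)
```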

## 3.1 Run inference for an image batch
```python
import torch

model.set_inference_mode()

# don't use random tensors IRL; this is a stand-in for a real (B, C, H, W) batch
batch = torch.rand(8, 3, 1024, 1024)

prob = model.predict(batch)
out = model.post_process(prob)
# out = {
#     "nuc": [
#         (nuc instances (H, W), nuc types (H, W)),
#         (nuc instances (H, W), nuc types (H, W)),
#         .
#         .
#         .
#         (nuc instances (H, W), nuc types (H, W))
#     ],
#     "cyto": None,
#     "tissue": None
# }
```
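
To build a real batch, read and transform each image as in step 3 and stack the results (a sketch; `image_paths` is a hypothetical list of paths, and `transform` is the one defined above, so all images come out at 1024x1024):

```python
import numpy as np
import torch
from cellseg_models_pytorch.utils import FileHandler

# image_paths is a hypothetical list of paths to your input images
ims = [transform(image=FileHandler.read_img(p))["image"] for p in image_paths]

# stack to (B, H, W, C), then permute to the (B, C, H, W) layout used above
batch = torch.from_numpy(np.stack(ims)).permute(0, 3, 1, 2).float()
```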

## 4. Visualize output
```python
from matplotlib import pyplot as plt
from skimage.color import label2rgb

fig, ax = plt.subplots(1, 3, figsize=(18, 6))
ax[0].imshow(im)
ax[1].imshow(label2rgb(out["nuc"][0][0], bg_label=0))  # inst_map
ax[2].imshow(label2rgb(out["nuc"][0][1], bg_label=0))  # type_map
```
<!-- ![out](cppnet_out.png) -->
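
For a view that keeps the tissue visible under the segmentation, the instance contours can also be drawn directly on the image (a sketch using scikit-image's `mark_boundaries`, not part of the original card):

```python
from skimage.segmentation import mark_boundaries

# draw red instance contours on top of the (normalized) input image
overlay = mark_boundaries(im, out["nuc"][0][0], color=(1, 0, 0))
plt.imshow(overlay)
plt.show()
```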

## Dataset Details
Semi-manually annotated CIN2 samples from a (private) cohort of Helsinki University Hospital.

**Contains:**
- 370 variably sized image crops at 20x magnification
- 168 640 annotated nuclei

## Dataset classes

```python
nuc_classes = {
    0: "background",
    1: "neoplastic",
    2: "inflammatory",
    3: "connective",
    4: "dead",
    5: "glandular_epithelial",
    6: "squamous_epithelial",
}
```
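
The integer values in the predicted type map correspond to these classes, so per-class statistics can be read straight from the output of step 3 (a minimal sketch; `out` comes from the inference snippets above):

```python
import numpy as np

inst_map, type_map = out["nuc"][0]

# count nuclei (not pixels) per class: collect the instance ids inside each type mask
for class_id, name in nuc_classes.items():
    if class_id == 0:
        continue  # background carries no instances
    inst_ids = np.unique(inst_map[type_map == class_id])
    inst_ids = inst_ids[inst_ids != 0]
    print(f"{name}: {len(inst_ids)} nuclei")
```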

## Dataset Class Distribution

- connective nuclei: 46 222 (~27.4%)
- neoplastic nuclei: 49 493 (~29.3%)
- inflammatory nuclei: 27 226 (~16.1%)
- dead nuclei: 195 (~0.12%)
- glandular epithelial nuclei: 14 310 (~8.5%)
- squamous epithelial nuclei: 31 194 (~18.5%)
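
The distribution is heavily skewed (dead nuclei are barely 0.1% of the annotations), which matters when fine-tuning on similar data. One common mitigation is inverse-frequency class weighting in the loss; a sketch from the counts above (the background weight of 1.0 and the weighted cross-entropy setup are illustrative assumptions, not details from this card):

```python
import torch

# foreground counts in nuc_classes order (1..6), taken from the list above
counts = torch.tensor([49_493.0, 27_226.0, 46_222.0, 195.0, 14_310.0, 31_194.0])

# inverse-frequency weights, normalized so the foreground weights average to 1
fg_weights = counts.sum() / (len(counts) * counts)

# prepend an (assumed) weight of 1.0 for the background class
weights = torch.cat([torch.tensor([1.0]), fg_weights])
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)
```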

# Model Training Details
First, the image crops in the training data were tiled into 224x224 px patches with a sliding window (stride=32 px).
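
Such a sliding-window tiling can be written in a few lines of numpy (a minimal sketch, not the exact training code):

```python
import numpy as np

def tile_image(im: np.ndarray, patch: int = 224, stride: int = 32) -> np.ndarray:
    """Extract (patch, patch) crops from an (H, W, C) image with a sliding window."""
    H, W = im.shape[:2]
    patches = [
        im[y : y + patch, x : x + patch]
        for y in range(0, H - patch + 1, stride)
        for x in range(0, W - patch + 1, stride)
    ]
    return np.stack(patches)  # (N, patch, patch, C)
```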

The rest of the training procedure follows this notebook: [link]

# Citation

cellseg_models.pytorch:
```bibtex
@misc{https://doi.org/10.5281/zenodo.12666959,
  doi = {10.5281/ZENODO.12666959},
  url = {https://zenodo.org/doi/10.5281/zenodo.12666959},
  author = {Okunator},
  title = {okunator/cellseg_models.pytorch: v0.2.0},
  publisher = {Zenodo},
  year = {2024},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

CPP-Net original paper:
```bibtex
@article{https://doi.org/10.48550/arxiv.2102.06867,
  doi = {10.48550/ARXIV.2102.06867},
  url = {https://arxiv.org/abs/2102.06867},
  author = {Chen, Shengcong and Ding, Changxing and Liu, Minfeng and Cheng, Jun and Tao, Dacheng},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {CPP-Net: Context-aware Polygon Proposal Network for Nucleus Segmentation},
  publisher = {arXiv},
  year = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

## License
These model weights are released under the Apache License, Version 2.0 (the "License"). You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

## Additional Terms

While the Apache 2.0 License grants broad permissions, we kindly request that users adhere to the following guidelines:

- **Medical or Clinical Use:** This model is not intended for use in the medical diagnosis, treatment, or prevention of disease in real patients, and it should not be used as a substitute for professional medical advice.