---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
- biology
---

# CPP-Net Model for High-Grade Serous Ovarian Cancer Nuclei Segmentation

# Model
- **cellseg_models.pytorch** implementation of **CPP-Net**: [https://arxiv.org/abs/2102.06867](https://arxiv.org/abs/2102.06867)
- Backbone encoder: pre-trained **efficientnet_b5** from pytorch-image-models: [https://github.com/huggingface/pytorch-image-models](https://github.com/huggingface/pytorch-image-models)

# USAGE

## 1. Install cellseg_models.pytorch and albumentations
```
pip install cellseg-models-pytorch
pip install albumentations
```

## 2. Load trained model
```python
from cellseg_models_pytorch.models.cppnet import CPPNet

model = CPPNet.from_pretrained("csmp-hub/cppnet-histo-hgsc-nuc-v1")
```

## 3. Run inference for one image
```python
from albumentations import Resize, Compose
from cellseg_models_pytorch.utils import FileHandler
from cellseg_models_pytorch.transforms.albu_transforms import MinMaxNormalization

model.set_inference_mode()

# Resize to a multiple of 32 of your choosing
transform = Compose([Resize(1024, 1024), MinMaxNormalization()])

im = FileHandler.read_img(IMG_PATH)
im = transform(image=im)["image"]

prob = model.predict(im)
out = model.post_process(prob)
# out = {"nuc": [(nuc instances (H, W), nuc types (H, W))], "cyto": None, "tissue": None}
```
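
If you need the segmentation at the original image resolution (the transform above resized the input to 1024x1024), the integer label maps can be mapped back with nearest-neighbor sampling so that label values are never blended. A minimal sketch; `resize_label_map` is a hypothetical helper, not part of the library:

```python
import numpy as np

def resize_label_map(label_map: np.ndarray, out_shape: tuple) -> np.ndarray:
    """Nearest-neighbor resize for an integer label map (no label blending)."""
    h, w = label_map.shape
    # index of the source row/column nearest to each output row/column
    rows = (np.arange(out_shape[0]) * h / out_shape[0]).astype(int)
    cols = (np.arange(out_shape[1]) * w / out_shape[1]).astype(int)
    return label_map[np.ix_(rows, cols)]
```

For example, `resize_label_map(out["nuc"][0][0], original_hw)` would return the instance map at the original image's height and width, with the original dtype and label ids intact.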

## 3.1 Run inference for image batch
```python
import torch
from cellseg_models_pytorch.utils import FileHandler

model.set_inference_mode()

# Toy input for illustration; use real image batches in practice
batch = torch.rand(8, 3, 1024, 1024)

prob = model.predict(batch)
out = model.post_process(prob)
# out = {
#     "nuc": [
#         (nuc instances (H, W), nuc types (H, W)),
#         (nuc instances (H, W), nuc types (H, W)),
#         ...
#         (nuc instances (H, W), nuc types (H, W))
#     ],
#     "cyto": None,
#     "tissue": None
# }
```
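
If a large stack of tiles does not fit in GPU memory at once, the same two calls can be run chunk by chunk with `torch.split`. A sketch assuming `model` is loaded as in step 2; `predict_in_chunks` is a hypothetical helper, not part of the library:

```python
import torch

def predict_in_chunks(model, batch: torch.Tensor, chunk_size: int = 4) -> list:
    """Run predict + post_process on sub-batches to bound peak memory use."""
    outputs = []
    for chunk in torch.split(batch, chunk_size, dim=0):
        prob = model.predict(chunk)
        outputs.append(model.post_process(prob))
    return outputs
```

Each element of the returned list is one post-processed output dict, in the same format as shown in step 3.1.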

## 4. Visualize output
```python
from matplotlib import pyplot as plt
from skimage.color import label2rgb

fig, ax = plt.subplots(1, 3, figsize=(18, 6))
ax[0].imshow(im)
ax[1].imshow(label2rgb(out["nuc"][0][0], bg_label=0))  # inst_map
ax[2].imshow(label2rgb(out["nuc"][0][1], bg_label=0))  # type_map
```
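
An instance-boundary overlay is often easier to inspect than flat label colors. A small sketch using `skimage.segmentation.mark_boundaries`; the `overlay_boundaries` helper name is illustrative, and `im`/`out` are assumed from the steps above:

```python
import numpy as np
from skimage.segmentation import mark_boundaries

def overlay_boundaries(image: np.ndarray, inst_map: np.ndarray) -> np.ndarray:
    """Return a float RGB copy of `image` with nucleus outlines drawn on top."""
    return mark_boundaries(image, inst_map, color=(1, 1, 0))  # yellow outlines
```

For example: `plt.imshow(overlay_boundaries(im, out["nuc"][0][0]))`.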

## Dataset Details
Semi-manually annotated HGSC primary omental samples from the (private) DECIDER cohort. Data were acquired in the DECIDER project, funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 965193.

**Contains:**
- 198 variably sized image crops at 20x magnification
- 98 468 annotated nuclei

## Dataset classes

```python
nuclei_classes = {
    0: "background",
    1: "neoplastic",
    2: "inflammatory",
    3: "connective",
    4: "dead",
    5: "macrophage_cytoplasm",
    6: "macrophage_nucleus",
}
```
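
With the mapping above, per-class summaries of a predicted type map are straightforward. A small sketch; `pixels_per_class` is an illustrative helper, not part of the library:

```python
import numpy as np

nuclei_classes = {
    0: "background", 1: "neoplastic", 2: "inflammatory",
    3: "connective", 4: "dead", 5: "macrophage_cytoplasm",
    6: "macrophage_nucleus",
}

def pixels_per_class(type_map: np.ndarray) -> dict:
    """Count pixels per class in a type map, keyed by class name."""
    ids, counts = np.unique(type_map, return_counts=True)
    return {nuclei_classes[int(i)]: int(c) for i, c in zip(ids, counts)}
```

For example, `pixels_per_class(out["nuc"][0][1])` gives a quick sanity check on how much of each class the model predicted in an image.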

## Dataset Class Distribution
- connective nuclei: 46 100 (~47%)
- neoplastic nuclei: 22 761 (~23%)
- inflammatory nuclei: 19 185 (~19%)
- dead nuclei: 1859 (~2%)
- macrophage nuclei and cytoplasms: 4550 (~5%)

# Model Training Details
First, the image crops in the training data were tiled into 224x224px patches with a sliding window (stride=32px).

The rest of the training procedure follows this notebook: [link]
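
The tiling step can be sketched roughly as follows. This is illustrative only: crops smaller than the patch size are skipped here, and the actual pipeline may handle image borders differently (e.g. by padding):

```python
import numpy as np

def tile_image(img: np.ndarray, patch: int = 224, stride: int = 32) -> list:
    """Extract patch x patch tiles with a top-left-anchored sliding window."""
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            tiles.append(img[y : y + patch, x : x + patch])
    return tiles
```

With stride much smaller than the patch size, neighboring tiles overlap heavily, which multiplies the effective number of training samples per crop.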

# Citation

cellseg_models.pytorch:
```
@misc{https://doi.org/10.5281/zenodo.12666959,
  doi = {10.5281/ZENODO.12666959},
  url = {https://zenodo.org/doi/10.5281/zenodo.12666959},
  author = {Okunator, },
  title = {okunator/cellseg_models.pytorch: v0.2.0},
  publisher = {Zenodo},
  year = {2024},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

CPP-Net original paper:
```
@article{https://doi.org/10.48550/arxiv.2102.06867,
  doi = {10.48550/ARXIV.2102.06867},
  url = {https://arxiv.org/abs/2102.06867},
  author = {Chen, Shengcong and Ding, Changxing and Liu, Minfeng and Cheng, Jun and Tao, Dacheng},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {CPP-Net: Context-aware Polygon Proposal Network for Nucleus Segmentation},
  publisher = {arXiv},
  year = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

## License
These model weights are released under the Apache License, Version 2.0 (the "License"). You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

## Additional Terms

While the Apache 2.0 License grants broad permissions, we kindly ask that users adhere to the following guideline:

**Medical or Clinical Use:** This model is not intended for use in the medical diagnosis, treatment, or prevention of disease in real patients. It should not be used as a substitute for professional medical advice.