Ashutosh Pathak committed
Commit c0f21d6 · 0 Parent(s)

Add the project

README.md ADDED
@@ -0,0 +1,40 @@
# Deep-Learning-liver-segmentation-project
<em>Final project of the Deep Learning course followed at <a href='https://www.imt-atlantique.fr/en'>IMT Atlantique</a>.</em><br><br>
<p>The code of this project is largely inspired by <a href='https://github.com/jocicmarko/ultrasound-nerve-segmentation'>this repository</a>, a tutorial for a Kaggle competition on ultrasound nerve segmentation. The goal of this project is to adapt that code to the segmentation of liver images, as described in <a href='https://arxiv.org/pdf/1702.05970.pdf'>this article</a>.
</p>

## Data
The data are available in NIfTI format <a href='https://www.dropbox.com/s/hx3dehfixjdifvu/ELU-502-ircad-dataset.zip?dl=0'>here</a>.
The dataset consists of 20 medical examinations in 3D; for each examination we have the source image as well as a liver segmentation mask. We use the <a href='http://nipy.org/nibabel/'>nibabel</a> library to read the images and masks.

## Model
<p>We train a U-Net, a fully convolutional network. The principle of this architecture is to complement a usual contracting network with layers in which pooling operators are replaced by upsampling operators. This allows the network to learn context (contracting path) and then localization (expansive path). Context information is propagated to higher-resolution layers through skip connections, so the output has the same size as the input.</p>

<p align="center"><img src="img/u-net-architecture.png"></p>

<p>In the data.py script, we perform axial cuts of our 3D images, so 256x256 images are fed to the network.</p>

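The axial cut plus undersampling step can be sketched in a few lines of NumPy; the volume below is a placeholder with the same in-plane resolution as the dataset:

```python
import numpy as np

# Hypothetical 3D scan: 512x512 in-plane, 40 axial slices.
volume = np.zeros((512, 512, 40), dtype=np.uint8)

# One axial cut along the z axis, undersampled by 2 in both spatial
# dimensions, as done in data.py: each slice becomes a 256x256 image.
k = 10
cut = volume[::2, ::2, k]
print(cut.shape)  # (256, 256)
```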
## Evaluation

As metric we use the <a href='https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient'>Dice coefficient</a>, which is closely related to the Jaccard index.

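A minimal NumPy version of the metric, for reference (the smoothing term is a common convention; the exact value used in the notebook may differ):

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1.0):
    """Sørensen–Dice coefficient for binary masks."""
    y_true = y_true.astype(bool).ravel()
    y_pred = y_pred.astype(bool).ravel()
    intersection = np.logical_and(y_true, y_pred).sum()
    return (2.0 * intersection + smooth) / (y_true.sum() + y_pred.sum() + smooth)

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(round(dice_coef(a, b, smooth=0.0), 3))  # 2*1 / (2+1) = 0.667
```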
## How it works
<ol><li>First download the data from the link given above</li>
<li>Create a 'raw' folder</li>
<li>In the 'raw' folder, create a 'test' folder and a 'train' folder</li>
<li>Then split the data into two sets (train and test; typically 13 samples for the train set and 7 for the test set) and put them in the corresponding directories in the 'raw' folder</li>
<li>Run data.py; this saves the train and test data in .npy format</li>
<li>Finally, launch the notebook; you can observe the Dice coefficient as a function of the number of epochs and visualize the predictions in the 'preds' folder</li>
</ol>
(Feel free to play with the parameters: learning rate, optimizer, etc.)

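Steps 2-4 above can be scripted; this is only a sketch (the folder names are the ones data.py expects, the 13/7 split follows the README, and the examination ids are placeholders since the actual filenames depend on the downloaded archive):

```python
import os
import random

# Step 2-3: create the folder layout that data.py expects.
os.makedirs('raw/train', exist_ok=True)
os.makedirs('raw/test', exist_ok=True)

# Step 4: split 20 hypothetical examination ids into 13 train / 7 test.
ids = list(range(1, 21))
random.seed(0)
random.shuffle(ids)
train_ids, test_ids = ids[:13], ids[13:]
print(len(train_ids), len(test_ids))  # 13 7
# Moving the actual NIfTI pairs into raw/train and raw/test is left to
# the reader, since it depends on the archive's file naming.
```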
## Some results

<p>Finally, we obtain predictions like the one below for a particular cut (thanks to the mark_boundaries function used in the notebook); the liver is delimited in yellow.</p>
<p align="center"><img src="img/segmentation-example1.png"></p>

<p>The evolution of the Dice coefficient over 20 epochs; this plot shows that the results are consistent, with a test Dice coefficient reaching almost 0.87.</p>
<p align="center"><img src="img/dice-20epochs-example.png"></p>
data.py ADDED
@@ -0,0 +1,101 @@
import os
import numpy as np
import nibabel

data_path = 'raw/'

# We undersample the 512x512 slices by 2 (for memory and speed).
image_rows = 512 // 2
image_cols = 512 // 2


def create_train_data():
    train_data_path = os.path.join(data_path, 'train')
    images = sorted(os.listdir(train_data_path))

    imgs_train = []     # training images
    imgsliv_train = []  # training masks (corresponding to the liver)
    print('-' * 30)
    print('Creating training images...')
    print('-' * 30)

    # After sorting, mask files ('liver...') come before image files
    # ('orig...'): even indices are masks, odd indices are images.
    masks = images[0::2]
    origs = images[1::2]

    for liver, orig in zip(masks, origs):
        imgl = nibabel.load(os.path.join(train_data_path, liver))  # 3D training mask
        imgo = nibabel.load(os.path.join(train_data_path, orig))   # 3D training image
        datal = np.asanyarray(imgl.dataobj)  # get_data() is deprecated in nibabel
        datao = np.asanyarray(imgo.dataobj)
        for k in range(imgl.shape[2]):
            # axial cuts along the z axis, undersampled by 2
            dimgl = np.array(datal[::2, ::2, k])
            dimgo = np.array(datao[::2, ::2, k])
            if len(np.unique(dimgl)) != 1:  # keep only 2D sections containing liver
                imgsliv_train.append(dimgl)
                imgs_train.append(dimgo)

    imgs = np.ndarray((len(imgs_train), image_rows, image_cols), dtype=np.uint8)
    imgs_mask = np.ndarray((len(imgsliv_train), image_rows, image_cols), dtype=np.uint8)
    for index, img in enumerate(imgs_train):
        imgs[index, :, :] = img
    for index, img in enumerate(imgsliv_train):
        imgs_mask[index, :, :] = img

    np.save('imgs_train.npy', imgs)
    np.save('imgsliv_train.npy', imgs_mask)
    print('Saving to .npy files done.')


def load_train_data():
    imgs_train = np.load('imgs_train.npy')
    imgs_mask_train = np.load('imgsliv_train.npy')
    return imgs_train, imgs_mask_train


def create_test_data():
    test_data_path = os.path.join(data_path, 'test')
    images = os.listdir(test_data_path)
    print('-' * 30)
    print('Creating testing images...')
    print('-' * 30)
    imgs_test = []
    imgsliv_test = []
    for image_name in images:
        print(image_name)
        img = nibabel.load(os.path.join(test_data_path, image_name))
        print(img.shape)
        data = np.asanyarray(img.dataobj)
        for k in range(img.shape[2]):
            dimg = np.array(data[::2, ::2, k])
            if 'liver' in image_name:
                imgsliv_test.append(dimg)
            elif 'orig' in image_name:
                imgs_test.append(dimg)

    imgst = np.ndarray((len(imgs_test), image_rows, image_cols), dtype=np.uint8)
    imgs_maskt = np.ndarray((len(imgsliv_test), image_rows, image_cols), dtype=np.uint8)
    for index, img in enumerate(imgs_test):
        imgst[index, :, :] = img
    for index, img in enumerate(imgsliv_test):
        imgs_maskt[index, :, :] = img

    np.save('imgs_test.npy', imgst)
    np.save('imgsliv_test.npy', imgs_maskt)
    print('Saving to .npy files done.')


def load_test_data():
    imgst = np.load('imgs_test.npy')
    imgs_id = np.load('imgsliv_test.npy')
    return [imgst, imgs_id]


if __name__ == '__main__':
    create_train_data()
    create_test_data()
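As a side note, the per-slice copy loops used above to pack the slice lists into uint8 arrays are equivalent to a single `np.stack` call; a minimal equivalence check on placeholder slices:

```python
import numpy as np

slices = [np.full((4, 4), v, dtype=np.uint8) for v in (1, 2, 3)]

# Loop-based copy, as in data.py
arr = np.ndarray((len(slices), 4, 4), dtype=np.uint8)
for i, s in enumerate(slices):
    arr[i, :, :] = s

# One-line equivalent
stacked = np.stack(slices).astype(np.uint8)
print(np.array_equal(arr, stacked))  # True
```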
img/dice-20epochs-example.png ADDED
img/segmentation-example1.png ADDED
img/u-net-architecture.png ADDED
preds/0_pred.png ADDED
preds/100_pred.png ADDED
preds/101_pred.png ADDED
preds/102_pred.png ADDED
preds/103_pred.png ADDED
preds/104_pred.png ADDED
preds/105_pred.png ADDED
preds/106_pred.png ADDED
preds/107_pred.png ADDED
preds/108_pred.png ADDED
preds/109_pred.png ADDED
preds/10_pred.png ADDED
preds/110_pred.png ADDED
preds/111_pred.png ADDED
preds/112_pred.png ADDED
preds/113_pred.png ADDED
preds/114_pred.png ADDED
preds/115_pred.png ADDED
preds/116_pred.png ADDED
preds/117_pred.png ADDED
preds/118_pred.png ADDED
preds/119_pred.png ADDED
preds/11_pred.png ADDED
preds/120_pred.png ADDED
preds/121_pred.png ADDED
preds/122_pred.png ADDED
preds/123_pred.png ADDED
preds/124_pred.png ADDED
preds/125_pred.png ADDED
preds/126_pred.png ADDED
preds/127_pred.png ADDED
preds/128_pred.png ADDED
preds/129_pred.png ADDED
preds/12_pred.png ADDED
preds/130_pred.png ADDED
preds/131_pred.png ADDED
preds/132_pred.png ADDED
preds/133_pred.png ADDED
preds/134_pred.png ADDED
preds/135_pred.png ADDED
preds/136_pred.png ADDED
preds/137_pred.png ADDED
preds/138_pred.png ADDED
preds/139_pred.png ADDED
preds/13_pred.png ADDED