---
title: 'livermask: Automatic Liver Parenchyma and vessel segmentation in CT'
colorFrom: indigo
colorTo: indigo
sdk: docker
app_port: 7860
emoji: 🔎
pinned: false
license: mit
app_file: demo/app.py
---
<div align="center">
<h1 align="center">livermask</h1>
<h3 align="center">Automatic liver parenchyma and vessel segmentation in CT using deep learning</h3>
[License](https://github.com/DAVFoundation/captain-n3m0/blob/master/LICENSE)
[Build Status](https://github.com/andreped/livermask/actions)
[DOI](https://zenodo.org/badge/latestdoi/238680374)
[Releases](https://github.com/andreped/livermask/releases)
[PyPI](https://pypi.org/project/livermask/)
**livermask** was developed by SINTEF Medical Technology to provide an open tool to accelerate research.
<img src="figures/Segmentation_3DSlicer.PNG" width="70%">
</div>
## Demo <a target="_blank" href="https://huggingface.co/spaces/andreped/livermask"><img src="https://img.shields.io/badge/🤗%20Hugging%20Face-Spaces-yellow.svg"></a>
An online version of the tool is openly available on Hugging Face Spaces, enabling researchers to easily test the software on their own data without installing anything locally. To access it, click on the badge above.
## Install
A stable release is available on PyPI:
```
pip install livermask
```
Alternatively, to install from source, run:
```
pip install git+https://github.com/andreped/livermask.git
```
As TensorFlow 2.4 only supports Python 3.6-3.8, so does livermask. The software is also compatible with Anaconda; however, the easiest way to install livermask is with `pip`, which also works inside conda environments.
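For instance, a clean environment can be created with conda and livermask installed into it with `pip` (a minimal sketch; the environment name `livermask-env` is just an example):
```
conda create -n livermask-env python=3.7
conda activate livermask-env
pip install livermask
```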
(Optional) To add GPU inference support for liver vessel segmentation (which uses Chainer and CuPy), you need to install [CuPy](https://github.com/cupy/cupy). This can be easily done by adding `cupy-cudaX`, where `X` is the CUDA version you have installed, for instance `cupy-cuda110` for CUDA-11.0:
```
pip install cupy-cuda110
```
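After installation, a quick sanity check (assuming CuPy was installed for the correct CUDA version) is to ask CuPy how many GPUs it can see:
```
python -c "import cupy; print(cupy.cuda.runtime.getDeviceCount())"
```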
The program has been tested using Python 3.7 on Windows, macOS, and Ubuntu Linux 20.04.
## Usage
```
livermask --input path-to-input --output path-to-output
```
| command<img width=10/> | description |
| ------------------- | ------------- |
| `--input` | the full path to the input data; can be a single NIfTI file or a directory of NIfTI files |
| `--output` | the full path to the output; either an output filename or a directory (if a directory was provided as input) |
| `--cpu` | to disable the GPU (force computations on CPU only) |
| `--verbose` | to enable verbose output |
| `--vessels` | to segment vessels |
| `--extension` | which extension to save output in (default: `.nii`) |
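As an example, the flags above can be combined. The command below (with purely hypothetical file names) segments both the liver parenchyma and vessels from a single CT volume, forces inference on the CPU, and stores the result as compressed NIfTI:
```
livermask --input patient_ct.nii --output patient_ct_seg --vessels --cpu --verbose --extension .nii.gz
```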
<details open>
<summary>
### Using code directly</summary>
If you wish to use the code directly (not as a CLI and without installing), you can run this command:
```
python -m livermask.livermask --input path-to-input --output path-to-output
```
</details>
<details>
<summary>
### DICOM/NIfTI format</summary>
The pipeline assumes the input is in the NIfTI format, and it outputs a binary volume in the same format (.nii or .nii.gz).
DICOM can be converted to NIfTI using the CLI [dcm2niix](https://github.com/rordenlab/dcm2niix), as follows:
```
dcm2niix -s y -m y -d 1 "path_to_CT_folder" "output_name"
```
Note that `-d 1` assumes that `path_to_CT_folder` is the folder just before the set of DICOM scans you want to import and convert. The flag can be removed if you want to convert multiple scans at the same time. It is possible to set `.` as `output_name`, which in theory should produce a file with the same name as the DICOM folder, but that does not always seem to happen.
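Putting it together, one possible workflow (the file names are placeholders, and the exact name of the converted file depends on your dcm2niix settings) converts the DICOM series and then segments the resulting NIfTI volume:
```
dcm2niix -s y -m y -d 1 "path_to_CT_folder" "ct_volume"
livermask --input ct_volume.nii --output ct_volume_liver --verbose
```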
</details>
<details>
<summary>
### Troubleshooting</summary>
You might have issues downloading the model when using a VPN. If so, try disabling the VPN and downloading again.
If the program struggles to install, try a forced reinstall from source:
```
pip install --force-reinstall --no-deps git+https://github.com/andreped/livermask.git
```
If you experience issues with numpy after installing CuPy, try reinstalling CuPy with an explicit version constraint:
```
pip install 'cupy-cuda110>=7.7.0,<8.0.0'
```
</details>
## Applications of livermask
* Wang et al., Machine learning-based radiomic models for predicting metachronous liver metastases in colorectal cancer patients: a multimodal study, Research Square (preprint), 2024, https://doi.org/10.21203/rs.3.rs-3320033/v1
* Yevdokimov et al., Recognition of Diffuse Hepatic Steatosis, 33rd Conference of Open Innovations Association, FRUCT, 2023, https://doi.org/10.23919/FRUCT58615.2023.10143062
* Pérez de Frutos et al., Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation, PLOS ONE, 2023, https://doi.org/10.1371/journal.pone.0282110
* Lee et al., Robust End-to-End Focal Liver Lesion Detection Using Unregistered Multiphase Computed Tomography Images, IEEE Transactions on Emerging Topics in Computational Intelligence, 2021, https://doi.org/10.1109/TETCI.2021.3132382
* Survarachakan et al., Effects of Enhancement on Deep Learning Based Hepatic Vessel Segmentation, Electronics, 2021, https://doi.org/10.3390/electronics10101165
## Segmentation performance metrics
The segmentation models were evaluated on an internal dataset against manual annotations. See Table E in S4 Appendix in the Supporting Information of [this paper](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0282110) for more information. The table presented there can also be seen below:
| Class | DSC | HD95 |
|--------|-------------------|------------------|
| Parenchyma | 0.946±0.046 | 10.122±11.032 |
| Vessels | 0.355±0.090 | 24.872±5.161 |
The parenchyma segmentation model was trained on the LITS dataset, whereas the vessel model was trained on a local dataset (Oslo-CoMet). The LITS dataset is openly accessible and can be downloaded from [here](https://competitions.codalab.org/competitions/17094).
The Oslo-CoMet dataset included 60 patients, of which 11 representative patients were used as a hold-out sample for the performance assessment.
## Acknowledgements
If you found this tool helpful in your research, please consider citing it (see [here](https://zenodo.org/badge/latestdoi/238680374) for more information on how to cite):
<pre>
@software{andre_pedersen_2023_7574587,
author = {André Pedersen and Javier Pérez de Frutos},
title = {andreped/livermask: v1.4.1},
month = jan,
year = 2023,
publisher = {Zenodo},
version = {v1.4.1},
doi = {10.5281/zenodo.7574587},
url = {https://doi.org/10.5281/zenodo.7574587}
}
</pre>
In addition, the segmentation performance of the tool was presented in [this paper](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0282110), so please cite that paper as well if it is relevant for your study:
<pre>
@article{perezdefrutos2022ddmr,
title = {Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation},
author = {Pérez de Frutos, Javier AND Pedersen, André AND Pelanis, Egidijus AND Bouget, David AND Survarachakan, Shanmugapriya AND Langø, Thomas AND Elle, Ole-Jakob AND Lindseth, Frank},
journal = {PLOS ONE},
publisher = {Public Library of Science},
year = {2023},
month = {02},
volume = {18},
doi = {10.1371/journal.pone.0282110},
url = {https://doi.org/10.1371/journal.pone.0282110},
pages = {1-14},
number = {2}
}
</pre>