Update README.md
README.md (CHANGED)
@@ -15,21 +15,62 @@ datasets:
 - medical
 ---

-# MedSAM2:
+# MedSAM2: Segment Anything in 3D Medical Images and Videos

 <div align="center">
-
-
-
-
-
-
-
-
-
-
+<table align="center">
+<tr>
+<td><a href="https://github.com/bowang-lab/MedSAM2/blob/main/tbd" target="_blank"><img src="https://img.shields.io/badge/Paper-blue?style=for-the-badge" alt="Paper"></a></td>
+<td><a href="https://huggingface.co/wanglab/MedSAM2" target="_blank"><img src="https://img.shields.io/badge/HuggingFace-FFD21E?style=for-the-badge&logoColor=FF9D00" alt="HuggingFace"></a></td>
+<td><a href="https://medsam-datasetlist.github.io/" target="_blank"><img src="https://img.shields.io/badge/Dataset%20List-4A90E2?style=for-the-badge&logoColor=white" alt="Dataset List"></a></td>
+<td><a href="https://huggingface.co/datasets/wanglab/CT_DeepLesion-MedSAM2" target="_blank"><img src="https://img.shields.io/badge/CT__DeepLesion--MedSAM2-green?style=for-the-badge" alt="CT_DeepLesion-MedSAM2"></a></td>
+<td><a href="https://huggingface.co/datasets/wanglab/LLD-MMRI-MedSAM2" target="_blank"><img src="https://img.shields.io/badge/LLD--MMRI--MedSAM2-orange?style=for-the-badge" alt="LLD-MMRI-MedSAM2"></a></td>
+<td><a href="https://github.com/bowang-lab/MedSAMSlicer/tree/MedSAM2" target="_blank"><img src="https://img.shields.io/badge/3D_Slicer-black?style=for-the-badge&logo=3DSlicer&logoColor=white" alt="3D Slicer"></a></td>
+<td><a href="https://github.com/bowang-lab/MedSAM2/blob/main/app.py" target="_blank"><img src="https://img.shields.io/badge/Gradio_App-yellow?style=for-the-badge&logoColor=white" alt="Gradio App"></a></td>
+<td><a href="https://colab.research.google.com/drive/1MKna9Sg9c78LNcrVyG58cQQmaePZq2k2?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/CoLab-4A90E2?style=for-the-badge&logo=CoLab&logoColor=white" alt="Colab"></a></td>
+<td><a href="https://github.com/bowang-lab/MedSAM2#citing-medsam2" target="_blank"><img src="https://img.shields.io/badge/BibTeX-4A90E2?style=for-the-badge&logo=BibTeX&logoColor=white" alt="BibTeX"></a></td>
+</tr>
+</table>
 </div>

+
+## Authors
+
+<p align="center">
+<a href="https://scholar.google.com.hk/citations?hl=en&user=bW1UV4IAAAAJ&view_op=list_works&sortby=pubdate">Jun Ma</a><sup>* 1,2</sup>,
+<a href="https://scholar.google.com/citations?user=8IE0CfwAAAAJ&hl=en">Zongxin Yang</a><sup>* 3</sup>,
+Sumin Kim<sup>2,4,5</sup>,
+Bihui Chen<sup>2,4,5</sup>,
+<a href="https://scholar.google.com.hk/citations?user=U-LgNOwAAAAJ&hl=en&oi=sra">Mohammed Baharoon</a><sup>2,3,5</sup>,
+<a href="https://scholar.google.com.hk/citations?user=4qvKTooAAAAJ&hl=en&oi=sra">Adibvafa Fallahpour</a><sup>2,4,5</sup>,
+<a href="https://scholar.google.com.hk/citations?user=UlTJ-pAAAAAJ&hl=en&oi=sra">Reza Asakereh</a><sup>4,7</sup>,
+Hongwei Lyu<sup>4</sup>,
+<a href="https://wanglab.ai/index.html">Bo Wang</a><sup>† 1,2,4,5,6</sup>
+</p>
+
+<p align="center">
+<sup>*</sup> Equal contribution <sup>†</sup> Corresponding author
+</p>
+
+<p align="center">
+<sup>1</sup>AI Collaborative Centre, University Health Network, Toronto, Canada<br>
+<sup>2</sup>Vector Institute for Artificial Intelligence, Toronto, Canada<br>
+<sup>3</sup>Department of Biomedical Informatics, Harvard Medical School, Harvard University, Boston, USA<br>
+<sup>4</sup>Peter Munk Cardiac Centre, University Health Network, Toronto, Canada<br>
+<sup>5</sup>Department of Computer Science, University of Toronto, Toronto, Canada<br>
+<sup>6</sup>Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada<br>
+<sup>7</sup>Roche Canada and Genentech
+</p>
+
+
+## Highlights
+
+- A promptable foundation model for 3D medical image and video segmentation
+- Trained on 455,000+ 3D image-mask pairs and 76,000+ annotated video frames
+- Versatile segmentation capability across diverse organs and pathologies
+- Extensive user studies on large-scale lesion and video datasets demonstrate that MedSAM2 substantially facilitates annotation workflows
+
+
 ## Model Overview
 MedSAM2 is a promptable segmentation model tailored for medical imaging applications. Built upon the foundation of the [Segment Anything Model (SAM) 2.1](https://github.com/facebookresearch/sam2), MedSAM2 has been specifically adapted and fine-tuned for various 3D medical images and videos.

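The updated overview describes MedSAM2 as a promptable model that keeps the SAM 2.1 interface, so a box or point prompt can drive segmentation of a 3D scan. Below is a minimal, unofficial sketch of that idea using the standard SAM 2 image predictor applied slice by slice; the checkpoint filename, config path, input file, and box coordinates are illustrative assumptions, not values taken from the model card.

```python
# Unofficial sketch: lesion segmentation on one CT volume with a MedSAM2
# checkpoint loaded through the SAM 2.1 image-predictor API.
# ASSUMPTIONS: the checkpoint filename "MedSAM2_latest.pt" and the config path
# "configs/sam2.1/sam2.1_hiera_t.yaml" are placeholders; take the real names
# from the wanglab/MedSAM2 model card and the bowang-lab/MedSAM2 repository.
import numpy as np
import torch
from huggingface_hub import hf_hub_download
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

device = "cuda" if torch.cuda.is_available() else "cpu"
ckpt = hf_hub_download(repo_id="wanglab/MedSAM2", filename="MedSAM2_latest.pt")  # assumed filename
predictor = SAM2ImagePredictor(build_sam2("configs/sam2.1/sam2.1_hiera_t.yaml", ckpt, device=device))

# ct: (D, H, W) volume, already intensity-windowed and scaled to [0, 255]
ct = np.load("ct_volume.npy")             # placeholder input path
box = np.array([120, 140, 220, 230])      # example box prompt: (x_min, y_min, x_max, y_max)

seg = np.zeros(ct.shape, dtype=np.uint8)
for z in range(ct.shape[0]):
    slice_rgb = np.repeat(ct[z][:, :, None], 3, axis=-1).astype(np.uint8)  # grayscale -> 3-channel
    predictor.set_image(slice_rgb)
    masks, scores, _ = predictor.predict(box=box, multimask_output=False)
    seg[z] = masks[0].astype(np.uint8)

np.save("ct_volume_seg.npy", seg)
```

Note that this toy loop re-applies the same box to every slice; the MedSAM2 repository's own inference scripts, the 3D Slicer plugin, and the Gradio app linked in the badges above instead propagate a single prompt through the volume or video using SAM 2's memory-based video predictor.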