---
datasets:
- wanglab/CT_DeepLesion-MedSAM2
- wanglab/LLD-MMRI-MedSAM2
language: en
library_name: torch
license: cc-by-sa-4.0
pipeline_tag: image-segmentation
tags:
- medical
- segmentation
- sam
- medical-imaging
- ct
- mri
- ultrasound
---
# MedSAM2: Segment Anything in 3D Medical Images and Videos
<div align="center">
<table align="center">
<tr>
<td><a href="https://arxiv.org/abs/2504.03600" target="_blank"><img src="https://img.shields.io/badge/arXiv-Paper-FF6B6B?style=for-the-badge&logo=arxiv&logoColor=white" alt="Paper"></a></td>
<td><a href="https://medsam2.github.io/" target="_blank"><img src="https://img.shields.io/badge/Project-Page-4285F4?style=for-the-badge&logoColor=white" alt="Project"></a></td>
<td><a href="https://github.com/bowang-lab/MedSAM2" target="_blank"><img src="https://img.shields.io/badge/GitHub-Code-181717?style=for-the-badge&logo=github&logoColor=white" alt="Code"></a></td>
<td><a href="https://huggingface.co/wanglab/MedSAM2" target="_blank"><img src="https://img.shields.io/badge/HuggingFace-Model-FFBF00?style=for-the-badge&logo=huggingface&logoColor=white" alt="HuggingFace Model"></a></td>
</tr>
<tr>
<td><a href="https://medsam-datasetlist.github.io/" target="_blank"><img src="https://img.shields.io/badge/Dataset-List-00B89E?style=for-the-badge" alt="Dataset List"></a></td>
<td><a href="https://huggingface.co/datasets/wanglab/CT_DeepLesion-MedSAM2" target="_blank"><img src="https://img.shields.io/badge/Dataset-CT__DeepLesion-28A745?style=for-the-badge" alt="CT_DeepLesion-MedSAM2"></a></td>
<td><a href="https://huggingface.co/datasets/wanglab/LLD-MMRI-MedSAM2" target="_blank"><img src="https://img.shields.io/badge/Dataset-LLD--MMRI-FF6B6B?style=for-the-badge" alt="LLD-MMRI-MedSAM2"></a></td>
<td><a href="https://github.com/bowang-lab/MedSAMSlicer/tree/MedSAM2" target="_blank"><img src="https://img.shields.io/badge/3D_Slicer-Plugin-e2006a?style=for-the-badge" alt="3D Slicer"></a></td>
</tr>
<tr>
<td><a href="https://github.com/bowang-lab/MedSAM2/blob/main/app.py" target="_blank"><img src="https://img.shields.io/badge/Gradio-Demo-F9D371?style=for-the-badge&logo=gradio&logoColor=white" alt="Gradio App"></a></td>
<td><a href="https://colab.research.google.com/drive/1MKna9Sg9c78LNcrVyG58cQQmaePZq2k2?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Colab-CT--Seg--Demo-F9AB00?style=for-the-badge&logo=googlecolab&logoColor=white" alt="CT-Seg-Demo"></a></td>
<td><a href="https://colab.research.google.com/drive/16niRHqdDZMCGV7lKuagNq_r_CEHtKY1f?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Colab-Video--Seg--Demo-F9AB00?style=for-the-badge&logo=googlecolab&logoColor=white" alt="Video-Seg-Demo"></a></td>
<td><a href="https://github.com/bowang-lab/MedSAM2?tab=readme-ov-file#bibtex" target="_blank"><img src="https://img.shields.io/badge/Paper-BibTeX-9370DB?style=for-the-badge&logoColor=white" alt="BibTeX"></a></td>
</tr>
</table>
</div>
## Authors
<p align="center">
<a href="https://scholar.google.com.hk/citations?hl=en&user=bW1UV4IAAAAJ&view_op=list_works&sortby=pubdate">Jun Ma</a><sup>* 1,2</sup>,
<a href="https://scholar.google.com/citations?user=8IE0CfwAAAAJ&hl=en">Zongxin Yang</a><sup>* 3</sup>,
Sumin Kim<sup>2,4,5</sup>,
Bihui Chen<sup>2,4,5</sup>,
<a href="https://scholar.google.com.hk/citations?user=U-LgNOwAAAAJ&hl=en&oi=sra">Mohammed Baharoon</a><sup>2,3,5</sup>,<br>
<a href="https://scholar.google.com.hk/citations?user=4qvKTooAAAAJ&hl=en&oi=sra">Adibvafa Fallahpour</a><sup>2,4,5</sup>,
<a href="https://scholar.google.com.hk/citations?user=UlTJ-pAAAAAJ&hl=en&oi=sra">Reza Asakereh</a><sup>4,7</sup>,
Hongwei Lyu<sup>4</sup>,
<a href="https://wanglab.ai/index.html">Bo Wang</a><sup>† 1,2,4,5,6</sup>
</p>
<p align="center">
<sup>*</sup> Equal contribution &nbsp;&nbsp;&nbsp; <sup>†</sup> Corresponding author
</p>
<p align="center">
<sup>1</sup>AI Collaborative Centre, University Health Network, Toronto, Canada<br>
<sup>2</sup>Vector Institute for Artificial Intelligence, Toronto, Canada<br>
<sup>3</sup>Department of Biomedical Informatics, Harvard Medical School, Harvard University, Boston, USA<br>
<sup>4</sup>Peter Munk Cardiac Centre, University Health Network, Toronto, Canada<br>
<sup>5</sup>Department of Computer Science, University of Toronto, Toronto, Canada<br>
<sup>6</sup>Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada<br>
<sup>7</sup>Roche Canada and Genentech
</p>
## Highlights
- A promptable foundation model for 3D medical image and video segmentation
- Trained on 455,000+ 3D image-mask pairs and 76,000+ annotated video frames
- Versatile segmentation capability across diverse organs and pathologies
- Extensive user studies on large-scale lesion and video datasets demonstrate that MedSAM2 substantially facilitates annotation workflows
## Model Overview
MedSAM2 is a promptable segmentation model tailored for medical imaging applications. Built on the [Segment Anything Model 2.1 (SAM 2.1)](https://github.com/facebookresearch/sam2), it has been adapted and fine-tuned for a wide range of 3D medical images and videos.
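As a rough illustration of the promptable workflow, the sketch below runs a single 2D slice through the upstream SAM 2.1 predictor API with a MedSAM2 checkpoint. This is a minimal sketch, not the official pipeline: the config path is an assumption (check the [MedSAM2 repository](https://github.com/bowang-lab/MedSAM2) for the config that matches each checkpoint), and real use would load an actual image slice instead of the placeholder array.

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2  # from the upstream sam2 package
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Assumption: SAM 2.1 tiny-variant config; see the MedSAM2 repo for the
# exact config file matching each released checkpoint.
config = "configs/sam2.1/sam2.1_hiera_t.yaml"
checkpoint = "checkpoints/MedSAM2_latest.pt"

device = "cuda" if torch.cuda.is_available() else "cpu"
model = build_sam2(config, checkpoint, device=device)
predictor = SAM2ImagePredictor(model)

# Placeholder 512x512 RGB slice; replace with a real (H, W, 3) uint8 image.
slice_rgb = np.zeros((512, 512, 3), dtype=np.uint8)
predictor.set_image(slice_rgb)

# Prompt with a bounding box (x_min, y_min, x_max, y_max) around the target.
masks, scores, _ = predictor.predict(box=np.array([100, 100, 400, 400]))
```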
## Available Models
- **MedSAM2_2411.pt**: Base model trained in November 2024
- **MedSAM2_US_Heart.pt**: Fine-tuned model specialized for heart ultrasound video segmentation
- **MedSAM2_MRI_LiverLesion.pt**: Fine-tuned model for liver lesion segmentation in MRI scans
- **MedSAM2_CTLesion.pt**: Fine-tuned model for general lesion segmentation in CT scans
- **MedSAM2_latest.pt** (recommended): Latest version, trained on a combination of public datasets and newly annotated medical imaging data
## Downloading Models
### Option 1: Download individual models
You can download the models directly from the Hugging Face repository:
```python
from huggingface_hub import hf_hub_download

# Download the recommended latest model
model_path = hf_hub_download(repo_id="wanglab/MedSAM2", filename="MedSAM2_latest.pt")

# Or download a specific fine-tuned model
heart_us_model_path = hf_hub_download(repo_id="wanglab/MedSAM2", filename="MedSAM2_US_Heart.pt")
liver_model_path = hf_hub_download(repo_id="wanglab/MedSAM2", filename="MedSAM2_MRI_LiverLesion.pt")
```
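Once downloaded, a checkpoint can be sanity-checked by loading it with plain PyTorch (a minimal sketch; the exact top-level layout depends on how the checkpoint was saved):

```python
import torch

# Load on CPU only to verify the file and inspect its contents.
ckpt = torch.load(model_path, map_location="cpu")
if isinstance(ckpt, dict):
    # Checkpoints are often either a raw state dict or a wrapper such as
    # {"model": state_dict, ...}; printing the top-level keys shows which.
    print(list(ckpt.keys())[:10])
```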
### Option 2: Download all models to a specific folder
```python
from huggingface_hub import hf_hub_download
import os

# Create the checkpoints directory if it doesn't exist
os.makedirs("checkpoints", exist_ok=True)

# List of model filenames
model_files = [
    "MedSAM2_2411.pt",
    "MedSAM2_US_Heart.pt",
    "MedSAM2_MRI_LiverLesion.pt",
    "MedSAM2_CTLesion.pt",
    "MedSAM2_latest.pt",
]

# Download all models into ./checkpoints
# (hf_hub_download returns the local file path; the deprecated
# local_dir_use_symlinks argument is no longer needed)
for model_file in model_files:
    local_path = hf_hub_download(
        repo_id="wanglab/MedSAM2",
        filename=model_file,
        local_dir="checkpoints",
    )
    print(f"Downloaded {model_file} to {local_path}")
```
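If you prefer a single call, `snapshot_download` from `huggingface_hub` mirrors the whole repository in one step (note that it fetches every file in the repo, not only the checkpoints listed above):

```python
from huggingface_hub import snapshot_download

# Download the entire wanglab/MedSAM2 repository into ./checkpoints
snapshot_download(repo_id="wanglab/MedSAM2", local_dir="checkpoints")
```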
Alternatively, you can manually download the models from the [Hugging Face repository page](https://huggingface.co/wanglab/MedSAM2).
## Citation
```bibtex
@article{MedSAM2,
  title={MedSAM2: Segment Anything in 3D Medical Images and Videos},
  author={Ma, Jun and Yang, Zongxin and Kim, Sumin and Chen, Bihui and Baharoon, Mohammed and Fallahpour, Adibvafa and Asakereh, Reza and Lyu, Hongwei and Wang, Bo},
  journal={arXiv preprint arXiv:2504.03600},
  year={2025}
}
```
## License
The model weights may be used for research and educational purposes only.