SAM2-Unet-tiny: Semantic Segmentation

SAM2-Unet is a hybrid segmentation model that integrates the Segment Anything Model (SAM) with U-Net, optimized for medical image segmentation and few-shot learning. It incorporates SAM's visual prompt mechanism into U-Net's encoder-decoder structure, enabling dynamic target guidance via interactive point/box inputs while retaining skip connections for multi-scale feature fusion. Lightweight adapters fine-tune SAM's pretrained weights, improving sensitivity to low-contrast regions in medical images (e.g., CT/MRI) and reducing reliance on large annotated datasets. Supporting both zero-shot transfer and few-shot tuning, it improves Dice scores by ~8% over a traditional U-Net on the BraTS and ISIC benchmarks with low computational overhead, making it well suited to clinical diagnostics and real-time lesion localization.
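
The sketch below illustrates the general idea of adapter-based fine-tuning on a frozen encoder combined with a U-Net style decoder. It is a minimal, self-contained PyTorch example in which a toy convolutional encoder stands in for SAM's pretrained image encoder; all class names, channel counts, and layer choices are illustrative assumptions, not the model's actual implementation.

```python
# Minimal PyTorch sketch of the adapter + U-Net decoder idea described above.
# The encoder is a toy stand-in for SAM's frozen image encoder; class names,
# channel counts, and layer choices are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Lightweight bottleneck adapter; only these small convs are trained."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.down = nn.Conv2d(channels, hidden, kernel_size=1)
        self.act = nn.GELU()
        self.up = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual adapter


class DecoderBlock(nn.Module):
    """U-Net style decoder block: upsample, concatenate skip, fuse."""

    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([x, skip], dim=1))


class AdapterUNet(nn.Module):
    """Frozen (pretend-pretrained) encoder + trainable adapters + U-Net decoder."""

    def __init__(self):
        super().__init__()
        chs = [32, 64, 128]
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU(inplace=True))
            for c_in, c_out in zip([3] + chs[:-1], chs)
        )
        self.adapters = nn.ModuleList(Adapter(c) for c in chs)
        self.dec2 = DecoderBlock(128, 64, 64)
        self.dec1 = DecoderBlock(64, 32, 32)
        self.head = nn.Conv2d(32, 1, kernel_size=1)
        for p in self.stages.parameters():  # freeze the "pretrained" encoder
            p.requires_grad = False

    def forward(self, x):
        feats = []
        for stage, adapter in zip(self.stages, self.adapters):
            x = adapter(stage(x))
            feats.append(x)
        f1, f2, f3 = feats
        x = self.dec1(self.dec2(f3, f2), f1)
        x = self.head(x)
        # Upsample back to the input resolution to emit a single-channel logit map.
        return F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = AdapterUNet()
    mask_logits = model(torch.randn(1, 3, 352, 352))
    print(mask_logits.shape)  # torch.Size([1, 1, 352, 352])
```

Freezing the encoder and training only the adapters and decoder is what keeps the fine-tuning footprint small; the real model applies the same principle to SAM's much larger pretrained backbone.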

Source model

  • Input shape: 1x3x352x352
  • Number of parameters: 28.38M
  • Model size: 119.42 MB
  • Output shape: 1x1x352x352 (see the I/O sketch below)
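
The following sketch shows pre- and post-processing that matches these shapes. The 352x352 resize target comes from the listed input shape; the ImageNet normalization statistics and the 0.5 threshold are assumptions rather than documented values for this model, and the dummy logits stand in for the converted model's actual forward call.

```python
# Sketch of pre/post-processing matching the I/O shapes listed above.
# Normalization stats and threshold are assumptions, not documented values.
import numpy as np
import torch
import torch.nn.functional as F

MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)  # assumed ImageNet stats
STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)


def preprocess(image_hwc_uint8: np.ndarray) -> torch.Tensor:
    """HxWx3 uint8 image -> normalized 1x3x352x352 float tensor."""
    x = torch.from_numpy(image_hwc_uint8).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    x = F.interpolate(x, size=(352, 352), mode="bilinear", align_corners=False)
    return (x - MEAN) / STD


def postprocess(logits: torch.Tensor, out_hw: tuple) -> np.ndarray:
    """1x1x352x352 logits -> HxW binary mask at the original image size."""
    probs = torch.sigmoid(logits)
    probs = F.interpolate(probs, size=out_hw, mode="bilinear", align_corners=False)
    return (probs[0, 0] > 0.5).numpy().astype(np.uint8)


if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in image
    x = preprocess(image)
    print(x.shape)  # torch.Size([1, 3, 352, 352])
    # Replace these dummy logits with the converted SAM2-Unet-tiny forward pass.
    dummy_logits = torch.zeros(1, 1, 352, 352)
    mask = postprocess(dummy_logits, image.shape[:2])
    print(mask.shape)  # (480, 640)
```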

The source model can be found here.

Performance Reference

Please search for the model by name in Model Farm.

Inference & Model Conversion

Please search for the model by name in Model Farm.

License
