---
library_name: transformers
tags:
- medical-imaging
- image-segmentation
- ultrasound
- foundation-models
- sam
---
# Model Card for Sam2Rad
Sam2Rad is a prompt-learning framework that adapts Segment Anything Model (SAM/SAM2) for autonomous segmentation of bony structures in ultrasound images. It eliminates the need for manual prompts through a lightweight Prompt Predictor Network (PPN) that generates learnable prompts directly from image features. Compatible with all SAM variants, it supports three modes: autonomous operation, semi-autonomous human-in-the-loop refinement, and fully manual prompting.
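To make the idea concrete, here is a minimal, hypothetical sketch of what a Prompt Predictor Network could look like: learnable query embeddings cross-attend to SAM-style image features and emit prompt embeddings for the mask decoder. The class name, dimensions, and architecture below are illustrative assumptions; the actual PPN implementation is in the GitHub repository.

```python
# Hypothetical PPN sketch (not the official implementation): learnable
# queries cross-attend to image features to produce prompt embeddings.
import torch
import torch.nn as nn


class PromptPredictor(nn.Module):
    def __init__(self, feat_dim=256, num_prompts=8):
        super().__init__()
        # Learnable query tokens, one per predicted prompt.
        self.queries = nn.Parameter(torch.randn(num_prompts, feat_dim))
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, image_feats):
        # image_feats: (B, C, H, W) from the SAM image encoder.
        b, c, h, w = image_feats.shape
        tokens = image_feats.flatten(2).transpose(1, 2)   # (B, H*W, C)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)   # (B, N, C)
        prompts, _ = self.attn(q, tokens, tokens)         # (B, N, C)
        # Prompt embeddings that would be fed to SAM's mask decoder.
        return self.proj(prompts)


feats = torch.randn(2, 256, 64, 64)
prompts = PromptPredictor()(feats)
print(prompts.shape)  # torch.Size([2, 8, 256])
```

Because the prompts are predicted from image features alone, no user clicks or boxes are required in autonomous mode; in semi-autonomous mode, predicted prompts can still be refined by a human.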
## Model Details

### Model Description
- Developed by: Assefa Seyoum Wahd, Banafshe Felfeliyan, Yuyue Zhou, et al. (University of Alberta and McGill University)
- Funded by: TD Ready Grant, IC-IMPACTS, One Child Every Child, Arthritis Society, Alberta Innovates AICE Concepts
- Shared by: Ayyuce Demirbas
- Model type: Vision Transformer (ViT)-based segmentation model with prompt learning
- Language(s) (NLP): N/A (Image-based model)
- License: [More Information Needed] (check the GitHub repository for the exact license)
- Finetuned from model: SAM/SAM2 (Hiera-Tiny, Hiera-Small, Hiera-Large, Hiera-Base+)
### Model Sources

- Repository: https://github.com/aswahd/SamRadiology
- Paper: "Sam2Rad: A Segmentation Model for Medical Images with Learnable Prompts"
- Demo: [More Information Needed]
## Uses

### Direct Use
- Automatic segmentation of bones in musculoskeletal ultrasound images (hip, wrist, shoulder)
- Integration into clinical workflows for real-time analysis or data labeling
### Downstream Use
- Active learning frameworks requiring rapid annotation
- Multi-class medical image segmentation with task-specific adaptations
### Out-of-Scope Use
- Non-ultrasound modalities (e.g., MRI, CT) without retraining
- Images with severe artifacts or non-anatomical structures
- Non-medical image segmentation
## Bias, Risks, and Limitations
- Domain specificity: Trained on musculoskeletal ultrasound; performance degrades on unseen modalities.
- Anatomical limitations: May struggle with atypical anatomies or surgical implants.
- Ethical considerations: Not validated for diagnostic use without clinician oversight.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Validate outputs against expert annotations in clinical deployments. Retrain PPN when applying to new anatomical regions or imaging protocols.
## How to Get Started with the Model
Use the code below to get started with the model:

```python
# See the GitHub repository for the full implementation:
# https://github.com/aswahd/SamRadiology
from transformers import AutoModel

model = AutoModel.from_pretrained("ayyuce/sam2rad")
```
Base model: facebook/sam2-hiera-tiny