SP85M

A ViT-base encoder (85M parameters) trained on 423,000 H&E slides from the Mount Sinai Health System.

Model Usage

To get started, first clone the repository, which provides the vision_transformer module imported below. The sparse-checkout flags skip the large .bin weight files, since from_pretrained downloads the weights separately:

  git clone --no-checkout https://huggingface.co/MountSinaiCompPath/SP85M && cd SP85M && git sparse-checkout init --no-cone && git sparse-checkout set '/*' '!*.bin' && git checkout
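If you prefer to stay in Python, the same sparse download can be done with huggingface_hub. This is a sketch, not part of the original card; snapshot_download and its ignore_patterns argument are standard huggingface_hub APIs, and the pattern simply mirrors the '!*.bin' rule above:

from huggingface_hub import snapshot_download

# Fetch the repository contents (including vision_transformer.py) while
# skipping the *.bin weight files; from_pretrained downloads those later.
snapshot_download(
    repo_id="MountSinaiCompPath/SP85M",
    local_dir="SP85M",
    ignore_patterns=["*.bin"],
)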

Now you can use the following code:

from PIL import Image
import numpy as np
import vision_transformer
import torch
import torch.nn as nn
import torchvision.transforms as transforms
from huggingface_hub import PyTorchModelHubMixin

class SP85M(nn.Module, PyTorchModelHubMixin):
    def __init__(self):
        super().__init__()
        self.encoder = vision_transformer.vit_base(num_classes=0)  # ViT-base backbone, matching the 85M-parameter checkpoint
    
    def forward(self, x):
        return self.encoder(x)

# Download the model
model = SP85M.from_pretrained("MountSinaiCompPath/SP85M")
model.eval()  # switch to inference mode

# Set up the preprocessing transform (ImageNet mean/std normalization)
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
])

# Create a dummy 224x224 RGB image as a stand-in for a real tissue tile
img = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
img = Image.fromarray(img)
img = transform(img).unsqueeze(0)  # add batch dimension

# Inference: extract the tile embedding
with torch.no_grad():
    h = model(img)
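The output h is the encoder's embedding for the tile. As a quick illustration (a sketch, not from the original card: the embed helper is hypothetical, and the expected dimension of 768 assumes the standard ViT-base hidden size), the snippet below compares two tiles by cosine similarity, as you might for nearest-neighbor retrieval:

import torch.nn.functional as F

# Hypothetical helper (not part of the repository): preprocess a PIL
# image and return its embedding.
def embed(pil_img):
    x = transform(pil_img).unsqueeze(0)
    with torch.no_grad():
        return model(x)

# Two dummy tiles standing in for real H&E patches
tile_a = Image.fromarray(np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8))
tile_b = Image.fromarray(np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8))

h_a, h_b = embed(tile_a), embed(tile_b)
print(h_a.shape)  # expected: torch.Size([1, 768]) for a ViT-base encoder

sim = F.cosine_similarity(h_a, h_b)
print(sim.item())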