`mlxim` is an image models library built on Apple MLX. It aims to replicate Ross Wightman's great `timm` library, but for MLX models.

You can find `mlxim` models on the Hub by filtering on the `mlxim` library name, as in this query.
There's also an open mlx-vision space where contributors convert and publish weights in MLX format.
Thanks to the MLX Hugging Face Hub integration, you can load MLX models with a few lines of code. First, install the library:

```bash
pip install mlx-image
```
Model weights are available in the mlx-vision community on the Hugging Face Hub.
To load a model with pre-trained weights:
```python
from mlxim.model import create_model

# load pretrained weights from the Hub (https://huggingface.co/mlx-vision/resnet18-mlxim)
model = create_model("resnet18")

# load weights from a local file
model = create_model("resnet18", weights="path/to/resnet18/model.safetensors")
```
To list all available models:
```python
from mlxim.model import list_models
list_models()
```
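Any name reported by `list_models` can be passed straight to `create_model`. For example (the architecture name here is hypothetical; check the list first):

```python
from mlxim.model import create_model, list_models

list_models()  # inspect the available architectures
model = create_model("resnet34")  # hypothetical: any listed name works here
```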
As of today (2024-03-08), `mlx` does not support the `groups` parameter for `nn.Conv2d`. Architectures that rely on grouped convolutions, such as `resnext`, `regnet`, or `efficientnet`, are therefore not yet supported in `mlxim`.
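Until grouped convolutions land upstream, they can be emulated by splitting the channel axis, running one standard `nn.Conv2d` per group, and concatenating the outputs. A minimal sketch (this module is an illustration, not part of `mlxim`):

```python
import mlx.core as mx
import mlx.nn as nn

class GroupedConv2d(nn.Module):
    """Emulate Conv2d(groups=G) with G independent standard convolutions."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int, groups: int):
        super().__init__()
        assert in_channels % groups == 0 and out_channels % groups == 0
        self.groups = groups
        self.convs = [
            nn.Conv2d(
                in_channels // groups,
                out_channels // groups,
                kernel_size,
                padding=kernel_size // 2,
            )
            for _ in range(groups)
        ]

    def __call__(self, x: mx.array) -> mx.array:
        # split the channel axis (MLX uses NHWC layout) into G equal chunks
        chunks = mx.split(x, self.groups, axis=-1)
        return mx.concatenate([conv(c) for conv, c in zip(self.convs, chunks)], axis=-1)
```

This runs G separate convolution kernels per layer, so it is noticeably slower than a native grouped convolution, which is why those architectures wait on upstream support.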
See results-imagenet-1k.csv for every model converted to `mlxim` and its performance on ImageNet-1K under different settings.

TL;DR: performance is comparable to the original PyTorch implementations.
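To reproduce such numbers yourself, a top-1 accuracy loop over a labeled validation folder could look like this sketch (the folder path and class map are placeholders, and any resizing/normalization the checkpoints expect is omitted for brevity):

```python
import mlx.core as mx
from mlxim.model import create_model
from mlxim.data import LabelFolderDataset, DataLoader

val_dataset = LabelFolderDataset(
    root_dir="path/to/val",  # placeholder
    class_map={0: "class_0", 1: "class_1"}  # placeholder
)
val_loader = DataLoader(dataset=val_dataset, batch_size=32, shuffle=False, num_workers=4)

model = create_model("resnet18")
model.eval()

correct, total = 0, 0
for x, target in val_loader:
    preds = mx.argmax(model(x), axis=-1)
    correct += mx.sum(preds == target).item()
    total += target.shape[0]
print(f"top-1 accuracy: {correct / total:.4f}")
```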
`mlxim` tries to stay as close as possible to PyTorch:

- `DataLoader` -> you can define your own `collate_fn` and use `num_workers` to speed up data loading
- `Dataset` -> `mlxim` already supports `LabelFolderDataset` (the good old PyTorch `ImageFolder`) and `FolderDataset` (a generic folder with images in it)
- `ModelCheckpoint` -> keeps track of the best model and saves it to disk (similar to PyTorch Lightning). It also suggests early stopping.

Training is similar to PyTorch. Here's an example of how to train a model:
```python
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

from mlxim.model import create_model
from mlxim.data import LabelFolderDataset, DataLoader

train_dataset = LabelFolderDataset(
    root_dir="path/to/train",
    class_map={0: "class_0", 1: "class_1", 2: ["class_2", "class_3"]}
)
train_loader = DataLoader(
    dataset=train_dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4
)
model = create_model("resnet18")  # pretrained weights loaded from HF
optimizer = optim.Adam(learning_rate=1e-3)

def train_step(model, inputs, targets):
    logits = model(inputs)
    loss = mx.mean(nn.losses.cross_entropy(logits, targets))
    return loss

# returns both the loss and the gradients w.r.t. the model's trainable parameters
train_step_fn = nn.value_and_grad(model, train_step)

model.train()
for epoch in range(10):
    for batch in train_loader:
        x, target = batch
        loss, grads = train_step_fn(model, x, target)
        optimizer.update(model, grads)
        # force evaluation of the lazy computation graph
        mx.eval(model.state, optimizer.state)
```
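After training, the weights can be persisted and later reloaded through the `weights` argument shown earlier. `save_weights` comes from MLX's `nn.Module`, so it should work for any `mlxim` model:

```python
# persist the trained parameters as safetensors
model.save_weights("path/to/resnet18/model.safetensors")

# ...and load them back later
from mlxim.model import create_model
model = create_model("resnet18", weights="path/to/resnet18/model.safetensors")
```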