---
license: apple-ascl
---

## MobileCLIP CoreML Models

These are the CoreML models of MobileCLIP. For more details, refer to [MobileCLIP on Hugging Face](https://huggingface.co/apple/mobileclip_b_timm) and [MobileCLIP on GitHub](https://github.com/apple/ml-mobileclip).

The models are separated by subarchitecture:

- **MobileCLIP-S0**: Designed for lightweight, fast inference, making it suitable for edge devices with limited computational resources.
- **MobileCLIP-S1**: Balances model complexity and performance, offering a good trade-off for a range of applications.
- **MobileCLIP-S2**: Focuses on higher accuracy, for applications where some speed can be traded for better results.
- **MobileCLIP-B**: Aims at the highest accuracy of the family, optimized for environments with ample computational resources.

Each subarchitecture provides a TextEncoder and an ImageEncoder as separate CoreML models:

| Model         | CLIP Text              | CLIP Image              |
|:--------------|:-----------------------|:------------------------|
| MobileCLIP-S0 | clip_text_s0.mlpackage | clip_image_s0.mlpackage |
| MobileCLIP-S1 | clip_text_s1.mlpackage | clip_image_s1.mlpackage |
| MobileCLIP-S2 | clip_text_s2.mlpackage | clip_image_s2.mlpackage |
| MobileCLIP-B  | clip_text_B.mlpackage  | clip_image_B.mlpackage  |

For detailed implementation and architecture specifics, refer to the [MobileCLIP GitHub repository](https://github.com/apple/ml-mobileclip).
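
A quick way to sanity-check a downloaded package is to open it with coremltools and print the interface stored inside it. A minimal Python sketch, assuming the .mlpackage directories from the table above sit in the working directory (running predictions additionally requires macOS):

```python
import coremltools as ct

# Paths are illustrative; point them at your local copies of the
# .mlpackage directories listed in the table above.
text_model = ct.models.MLModel("clip_text_s0.mlpackage")
image_model = ct.models.MLModel("clip_image_s0.mlpackage")

# Each package carries its own input/output descriptions, which should
# match the parameter tables below.
print(text_model.get_spec().description)
print(image_model.get_spec().description)
```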

**CoreML Parameters:**

| Model     | Input Name | Input Shape | Input DataType | Output Name       | Output Shape | Output DataType |
|:----------|:-----------|:------------|:---------------|:------------------|:-------------|:----------------|
| CLIP Text | input_text | (1, 77)     | INT32          | output_embeddings | (1, 512)     | FLOAT16         |

| Model      | Input Name  | Input Width | Input Height | Input ColorSpace | Output Name       | Output Shape | Output DataType |
|:-----------|:------------|:------------|:-------------|:-----------------|:------------------|:-------------|:----------------|
| CLIP Image | input_image | 256         | 256          | RGB              | output_embeddings | (1, 512)     | FLOAT16         |
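
Tying the two tables together, here is a hedged Python sketch of running both encoders with coremltools (macOS only). The file names come from the table above; the zero token array and blank image are stand-ins, since a real run would tokenize text with a CLIP tokenizer (e.g. the one in the linked GitHub repository) and pass an actual photo:

```python
import coremltools as ct
import numpy as np
from PIL import Image

text_model = ct.models.MLModel("clip_text_s0.mlpackage")
image_model = ct.models.MLModel("clip_image_s0.mlpackage")

# input_text expects (1, 77) INT32 token IDs; zeros stand in for real
# CLIP tokenizer output here.
tokens = np.zeros((1, 77), dtype=np.int32)
text_out = text_model.predict({"input_text": tokens})

# input_image expects a 256x256 RGB image; CoreML image inputs accept a
# PIL image directly. A blank image stands in for a real photo.
image_out = image_model.predict({"input_image": Image.new("RGB", (256, 256))})

# Both encoders emit a (1, 512) FLOAT16 embedding named "output_embeddings".
t = text_out["output_embeddings"].astype(np.float32)
v = image_out["output_embeddings"].astype(np.float32)

# Normalize and compare, as in any CLIP-style similarity computation.
t /= np.linalg.norm(t, axis=-1, keepdims=True)
v /= np.linalg.norm(v, axis=-1, keepdims=True)
print("cosine similarity:", (t @ v.T).item())
```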

*The following example scripts perform the conversion to CoreML:*

1. **CLIPImageModel to CoreML** [Open in Colab](https://colab.research.google.com/drive/1ZHMzsJyAukBa4Jryv4Tmc_BOBmbQAjxf?usp=sharing)
   - This notebook demonstrates the process of converting a CLIP image model to CoreML format.
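
The notebook above is the authoritative reference for the conversion. As a rough outline of the steps it involves, here is a hedged sketch using torch.jit.trace and coremltools; it assumes the mobileclip Python package from the linked GitHub repository and an illustrative local checkpoint path, with input/output names and shapes following the parameter tables above:

```python
import coremltools as ct
import torch
import mobileclip  # from the linked ml-mobileclip repository


class ImageTower(torch.nn.Module):
    """Thin wrapper so only the image branch gets traced."""

    def __init__(self, clip_model):
        super().__init__()
        self.clip_model = clip_model

    def forward(self, pixels):
        return self.clip_model.encode_image(pixels)


# Checkpoint path is illustrative; download it per the GitHub instructions.
model, _, _ = mobileclip.create_model_and_transforms(
    "mobileclip_s0", pretrained="checkpoints/mobileclip_s0.pt"
)
model.eval()

# Trace the image tower with a dummy 1x3x256x256 input.
example = torch.rand(1, 3, 256, 256)
traced = torch.jit.trace(ImageTower(model), example)

# Convert the traced graph to an .mlpackage with an image input; ML Program
# models default to FLOAT16 compute, matching the tables above.
mlmodel = ct.convert(
    traced,
    inputs=[ct.ImageType(name="input_image", shape=example.shape,
                         color_layout=ct.colorlayout.RGB)],
    outputs=[ct.TensorType(name="output_embeddings")],
    convert_to="mlprogram",
)
mlmodel.save("clip_image_s0.mlpackage")
```

The text encoder follows the same pattern with a (1, 77) INT32 `ct.TensorType` input wrapping `encode_text` instead.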