NASNet: Image Classification
NASNet, introduced by Google in 2017, is a convolutional architecture discovered automatically through Neural Architecture Search (NAS). A reinforcement-learning controller searches for optimal building blocks (Cells) on CIFAR-10, which are then stacked and scaled up for large-scale tasks such as ImageNet classification. With fewer parameters than comparable hand-designed networks (e.g., the mobile NASNet-A variant at 5.3M), NASNet achieved state-of-the-art accuracy and computational efficiency. Although the search process required massive GPU resources, NASNet demonstrated the viability of automated architecture design, inspiring EfficientNet and advancing AutoML. Its modular Cells were widely adapted for tasks like object detection, cementing NASNet's role in efficient model development.
Source model
- Input shape: 1x224x224x3
- Number of parameters: 88.7M
- Model size: 338 MB
- Output shape: 1x1000
The source model can be found here.
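As a quick sanity check on the published I/O contract (NHWC input of 1x224x224x3, output of 1x1000), the sketch below runs a dummy tensor through an ONNX export of the model with ONNX Runtime. The file name `nasnet.onnx` and the use of ONNX Runtime are assumptions for illustration only; the actual deployable format distributed through Model Farm may differ.

```python
# Minimal sketch: verify the documented input/output shapes on an ONNX export.
# Assumptions: the model has been exported to "nasnet.onnx" (hypothetical path),
# takes a single NHWC float32 input, and produces a single output tensor.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("nasnet.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy image batch matching the documented input shape: 1 x 224 x 224 x 3 (NHWC).
x = np.random.rand(1, 224, 224, 3).astype(np.float32)

(logits,) = session.run(None, {input_name: x})
print(logits.shape)  # expected: (1, 1000) -- one score per ImageNet class
```

For real images, apply the preprocessing the checkpoint expects (resize to 224x224 and scale pixel values accordingly) before feeding the tensor.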
Performance Reference
Please search for the model by name in Model Farm.
Inference & Model Conversion
Please search for the model by name in Model Farm.
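The Model Farm conversion pipeline itself is not documented here. As a rough, generic illustration of converting a TensorFlow NASNet to an on-device format, the sketch below exports the Keras NASNetMobile reference implementation to TFLite. NASNetMobile (5.3M parameters, 224x224x3 input) is used purely as a readily available stand-in; it is not the 88.7M-parameter source model above, and the actual Model Farm toolchain may differ.

```python
# Generic sketch (not the Model Farm pipeline): export a Keras NASNet to TFLite.
# NASNetMobile serves as an illustrative stand-in for the source model.
import tensorflow as tf

model = tf.keras.applications.NASNetMobile(weights="imagenet")  # 224x224x3 input, 1000 classes

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional post-training optimization
tflite_model = converter.convert()

with open("nasnet_mobile.tflite", "wb") as f:
    f.write(tflite_model)
```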
License
Source Model: APACHE-2.0
Deployable Model: APLUX-MODEL-FARM-LICENSE