
ConvNext-Tiny-w8a8-Quantized

ImageNet classifier and general-purpose backbone.

ConvNext-Tiny is a machine learning model that can classify images from the ImageNet dataset. It can also serve as a backbone when building more complex models for specific use cases.

Snapdragon® X Elite
TorchScript → Qualcomm® AI Engine Direct
Inference Time: 1.92 ms
Memory Usage: 0 MB
NPU Layers: 215

Technical Details

Model checkpoint: ImageNet
Input resolution: 224x224
Number of parameters: 28.6M
Model size: 28 MB
Precision: w8a8 (8-bit weights, 8-bit activations)
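The listed model size follows directly from the precision: with 8-bit weights, each of the 28.6M parameters occupies one byte, which lands at roughly the 28 MB shown above (a back-of-the-envelope check, ignoring per-tensor quantization metadata):

```python
# Rough size check for a w8a8 model: one byte per weight.
params = 28.6e6        # number of parameters from the table above
bytes_per_weight = 1   # 8-bit weights
size_mb = params * bytes_per_weight / 1e6
```

The same network in 32-bit floats would be about 4x larger, which is one reason quantized models suit on-device deployment.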

Applicable Scenarios

  • Medical Imaging
  • Anomaly Detection
  • Inventory Management

Licenses

Source Model: BSD-3-Clause
Deployable Model: AI Model Hub License

Tags

  • quantized
    A “quantized” model can run in low or mixed precision, which can substantially reduce inference latency.
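The w8a8 scheme maps 32-bit floats onto 8-bit integers via a scale and zero point. A minimal sketch of affine (asymmetric) 8-bit quantization, with an assumed range of roughly [-1, 1] for illustration:

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Affine 8-bit quantization: q = round(x / scale) + zero_point,
    # clipped to the uint8 range [0, 255].
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    # Recover an approximation of the original float values.
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
scale, zero_point = 2.0 / 255, 128  # assumed: covers roughly [-1, 1]
q = quantize(x, scale, zero_point)
x_hat = dequantize(q, scale, zero_point)
```

Because both weights and activations stay in 8-bit integers, the NPU can use integer arithmetic throughout, which is what drives the latency reduction.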

Supported Compute Chipsets

  • Snapdragon® X Elite