All Models
Optimized and validated by Qualcomm
Deploy optimized models on real devices in minutes
Qualcomm® AI Hub simplifies deploying AI models for vision, audio, and speech applications to edge devices, often within minutes. The example below shows how you can deploy your own PyTorch model to a real hosted device. See the documentation for more details. If you hit any issues with your model (performance, accuracy, or otherwise), please file an issue.
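The following is a minimal sketch of that flow using the qai_hub Python client. It assumes the client is installed (pip install qai-hub) and configured with an API token; the MobileNetV2 model and the "Samsung Galaxy S24" device name are illustrative choices, not requirements, and parameter names may differ slightly across SDK releases.

```python
import qai_hub as hub  # Qualcomm AI Hub client (assumed installed and configured)
import torch
import torchvision

# Load and trace a pretrained PyTorch model; AI Hub ingests TorchScript.
torch_model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
input_shape = (1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, torch.rand(input_shape))

# Compile the model for a hosted device ("Samsung Galaxy S24" is an
# illustrative device name; any device in the catalog works).
device = hub.Device("Samsung Galaxy S24")
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=device,
    input_specs=dict(image=input_shape),
)

# Profile the compiled model on the real device to get on-target
# latency and memory numbers.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
```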
Tags
- A “backbone” model is designed to extract task-agnostic representations from specific data modalities (e.g., images, text, speech). This representation can then be fine-tuned for specialized tasks.
- A “foundation” model is versatile and designed for multi-task capabilities, without the need for fine-tuning.
- A “generative” model can produce text, images, or other data, often in response to prompts.
- An “LLM” is a large language model, useful for a variety of tasks including language generation, optical character recognition, information retrieval, and more.
- A “quantized” model can run in low or mixed precision (e.g., w8a8 or w8a16), which can substantially reduce inference latency; a brief illustration follows this list.
- A “real-time” model can typically achieve 5-60 predictions per second, i.e., a per-prediction latency of roughly 17-200 ms.
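As an illustration of the idea behind the quantized variants (not the AI Hub quantization pipeline itself), here is a minimal PyTorch sketch of dynamic int8 quantization; the model and the set of quantized module types are arbitrary examples.

```python
import torch
import torchvision

# Float32 baseline model.
model_fp32 = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()

# Dynamically quantize Linear layers to int8 weights; activations are
# quantized on the fly at inference time. This is a generic PyTorch
# technique, not the exact w8a8/w8a16 schemes used by the models below.
model_int8 = torch.ao.quantization.quantize_dynamic(
    model_fp32, {torch.nn.Linear}, dtype=torch.qint8
)

# Compare outputs on a dummy input; smaller weights generally mean
# lower latency and memory use, at some cost in accuracy.
x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    y_fp32 = model_fp32(x)
    y_int8 = model_int8(x)
print((y_fp32 - y_int8).abs().max())
```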
108 models
- AOT-GAN
- Baichuan-7B
- ControlNet
- ConvNext-Tiny
- ConvNext-Tiny-w8a8-Quantized
- ConvNext-Tiny-w8a16-Quantized
- DDRNet23-Slim
- DeepLabV3-Plus-MobileNet
- DeepLabV3-Plus-MobileNet-Quantized
- DeepLabV3-ResNet50
- DenseNet-121
- DETR-ResNet50
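Each catalog entry can also be pulled locally through the companion qai-hub-models Python package. Below is a minimal sketch, assuming the package is installed (pip install qai-hub-models) and that the per-model module layout (qai_hub_models.models.<name>) matches the current release; ConvNext-Tiny is used as an arbitrary example.

```python
import torch

# Assumed module path: each catalog entry maps to a module under
# qai_hub_models.models (e.g., convnext_tiny for ConvNext-Tiny).
from qai_hub_models.models.convnext_tiny import Model

# Load pretrained weights and run a local sanity-check inference.
model = Model.from_pretrained()
model.eval()
with torch.no_grad():
    out = model(torch.rand(1, 3, 224, 224))
print(out.shape)
```

Recent releases of the package also ship per-model export entry points (python -m qai_hub_models.models.<name>.export) that run a compile-and-profile flow like the one shown earlier.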