Mobile
Enabling intelligent connections and personalized applications across devices
Sample Ready Apps
- Super Resolution: sample application to deploy an optimized super resolution solution on device
- Image Classification: sample application to deploy an optimized image classification solution on device
- Semantic Segmentation: sample application to deploy an optimized semantic segmentation solution on device
Deploy optimized models on real devices in minutes
Qualcomm® AI Hub simplifies deploying AI models for vision, audio, and speech applications to edge devices within minutes. The example below shows how you can deploy your own PyTorch model on a real hosted device. See the documentation for more details. If you hit any issues with your model (performance, accuracy, or otherwise), please file an issue.
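As a minimal sketch of that workflow using the qai_hub Python client, the snippet below traces a pretrained torchvision MobileNetV2, submits a compile job targeting a cloud-hosted device, and then profiles the compiled model on that device. The device name and model choice are illustrative, and the sketch assumes the client is installed and already configured with an API token.

```python
import torch
from torchvision.models import mobilenet_v2

import qai_hub as hub

# Load a pretrained PyTorch model and switch to inference mode.
torch_model = mobilenet_v2(weights="DEFAULT")
torch_model.eval()

# Trace the model so it can be submitted to AI Hub.
input_shape = (1, 3, 224, 224)
example_input = torch.rand(input_shape)
traced_model = torch.jit.trace(torch_model, example_input)

# Compile the traced model for a specific hosted device
# (device name here is an example; any supported device works).
device = hub.Device("Samsung Galaxy S23")
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=device,
    input_specs=dict(image=input_shape),
)

# Profile the compiled model on the real hosted device to
# measure on-device latency and memory usage.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
```

The profile job runs on actual hardware in the cloud, so its results reflect on-device performance rather than an emulator; the same compiled model can also be passed to an inference job to check numerical accuracy against the original PyTorch outputs.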
Models in the catalog can be filtered by domain/use case, device, chipset, model precision, and tags. The tags are defined as follows:
- A “backbone” model is designed to extract task-agnostic representations from specific data modalities (e.g., images, text, speech). This representation can then be fine-tuned for specialized tasks.
- A “foundation” model is versatile and designed for multi-task capabilities, without the need for fine-tuning.
- Models that generate text, images, or other data, often in response to prompts.
- Large language models. Useful for a variety of tasks including language generation, optical character recognition, information retrieval, and more.
- A “quantized” model can run in low or mixed precision, which can substantially reduce inference latency.
- A “real-time” model can typically achieve 5-60 predictions per second, which corresponds to a latency of roughly 17-200 ms per prediction.
The catalog currently lists 137 models.