IoT
Deploy real-time AI to a variety of devices, providing next-generation user experiences
Deploy with Foundries.io
Deploy optimized models on real devices in minutes
Qualcomm® AI Hub simplifies deploying AI models for vision, audio, and speech applications to edge devices within minutes. For example, you can deploy your own PyTorch model on a real hosted device, as sketched below. See the documentation for more details. If you hit any issues with your model (performance, accuracy, or otherwise), please file an issue with the Qualcomm AI Hub team.
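As a rough illustration of that workflow, the sketch below uses the qai_hub Python client (installed via pip as qai-hub) to trace a PyTorch model, compile it for a hosted device, and profile it on real hardware. The model, device name, and input shape are illustrative assumptions, not the exact example from this page; consult the Qualcomm AI Hub documentation for the authoritative walkthrough.

```python
# Minimal sketch, assuming the qai_hub Python client is installed and
# configured with an API token. Device name and input shape are placeholders.
import torch
import torchvision
import qai_hub as hub

# 1. Start from any PyTorch model and trace it to TorchScript.
model = torchvision.models.mobilenet_v2().eval()
input_shape = (1, 3, 224, 224)
traced_model = torch.jit.trace(model, torch.rand(input_shape))

# 2. Compile the traced model for a device hosted by Qualcomm AI Hub.
#    "QCS6490 (Proxy)" is a placeholder; list real devices with hub.get_devices().
device = hub.Device("QCS6490 (Proxy)")
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=device,
    input_specs=dict(image=input_shape),
)

# 3. Profile the compiled model on a real hosted device to measure on-target latency.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
print(profile_job.download_profile())
```

From here, the compiled model can also be run on hosted hardware with submitted inference jobs, or downloaded and deployed to your own device.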
Models in the catalog can be filtered by chipset, domain/use case, model precision, IM SDK support, and tags. The tags are defined as follows:
- A “backbone” model is designed to extract task-agnostic representations from a specific data modality (e.g., images, text, or speech); it can then be fine-tuned for specialized tasks.
- A “foundation” model is versatile and designed for multi-task capabilities, without the need for fine-tuning.
- A “generative” model can generate text, images, or other data, often in response to prompts.
- A “quantized” model can run in low or mixed precision, which can substantially reduce inference latency.
- A “real-time” model can typically achieve 5-60 predictions per second, which translates to a latency of roughly 17-200 ms per prediction.
The catalog currently lists 131 models.