The platform for on-device AI
Any model, any device, any runtime. Deploy within minutes.
Ecosystem
Create your end-to-end on-device ML solution with our ecosystem of model makers, cloud providers, and runtime and SDK partners.
Bring your own model and data to Qualcomm AI Hub
Compile
Convert trained models and optimize them for on-device deployment by simply selecting a target device and runtime.
Profile
Submit a compiled model to run on a physical device. Dig into model performance, including compute-unit utilization, latency, and memory metrics.
Deploy on device
Download the compiled model, learn from our sample apps, and bundle it into your app.
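The compile-profile-deploy steps above can be sketched with the Qualcomm AI Hub Python client (`qai_hub`). This is a minimal illustration, not a definitive recipe: it assumes a PyTorch model, an API token already set up via `qai-hub configure`, and uses one example device name among the many supported targets.

```python
import qai_hub as hub
import torch
import torchvision

# Bring your own model: trace a pretrained PyTorch model so it can be uploaded.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)

# Compile: convert and optimize for a selected target device and runtime.
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=hub.Device("Samsung Galaxy S24 (Family)"),  # example target device
    input_specs=dict(image=(1, 3, 224, 224)),
)

# Profile: run the compiled model on a physical device and collect
# compute-unit, latency, and memory metrics (viewable in the Hub dashboard).
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=hub.Device("Samsung Galaxy S24 (Family)"),
)

# Deploy: download the compiled model for bundling into your app
# (the default runtime here is assumed to be TensorFlow Lite).
compile_job.get_target_model().download("mobilenet_v2.tflite")
```

Note that both jobs run asynchronously in the cloud; this snippet requires network access and a configured API token, so it is a sketch of the workflow rather than something runnable offline.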
Qualcomm AI Stack
Easily deploy optimized AI models on Qualcomm® devices to run on CPU, GPU, or NPU using TensorFlow Lite, ONNX Runtime, or Qualcomm® AI Engine Direct.