Qualcomm® AI Hub

The platform for on-device AI

Any model, any device, any runtime. Deploy within minutes.

Ecosystem

Create your end-to-end on-device ML solution with our ecosystem of model makers, cloud providers, and runtime and SDK partners

New Device

Run your models on the Snapdragon® 8 Elite, available now on AI Hub

Model Maker

Explore models from Mistral on Qualcomm AI Hub

Model Maker

Check out Tech Mahindra’s IndusQ 1.1B

Model Maker

Learn more about PLaMo 1B by Preferred Networks

Model Maker

Integrate IBM's Granite‑3B‑Code‑Instruct into your applications

Model Maker

G42’s Jais 6.7B, available now

Model Maker

Try out Llama 3.2, the open‑source AI model you can fine‑tune, distill, and deploy anywhere

ML Service

Train, fine‑tune, and deploy models on edge devices using Amazon SageMaker and Qualcomm AI Hub

ML Service

Try out Dataloop’s automated pipeline for data curation, then bring your trained model to Qualcomm AI Hub.

ML Service

Create AI applications with hardware-aware model optimizations

ML Service

Build your on‑device computer vision workflows and optimize them with Qualcomm AI Hub

Runtime

Submit your model to Qualcomm AI Hub using LiteRT

Runtime

Submit your model to Qualcomm AI Hub using ONNX Runtime
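
If you are curious what a submission looks like in practice, here is a minimal sketch using the qai_hub Python client (pip install qai-hub); the model, device name, and option strings are illustrative assumptions rather than a definitive recipe, and the same call targets either LiteRT or ONNX Runtime via the --target_runtime option.

```python
# A minimal sketch, assuming the qai_hub Python client and a configured API token.
# The device name and option strings are illustrative.
import torch
import torchvision
import qai_hub as hub

# Trace a PyTorch model so AI Hub can compile it.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Compile for LiteRT; swap the option string to target ONNX Runtime instead.
compile_job = hub.submit_compile_job(
    model=traced,
    device=hub.Device("Samsung Galaxy S24 (Family)"),  # illustrative device name
    input_specs={"image": (1, 3, 224, 224)},
    options="--target_runtime tflite",  # or "--target_runtime onnx"
)
compiled_model = compile_job.get_target_model()
```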

SDK

Argmax has launched its WhisperKit SDK on Qualcomm devices

Bring your own model and data to Qualcomm AI Hub
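
As a rough sketch of the bring-your-own-model-and-data flow, again assuming the qai_hub Python client: the model path, input name and shape, and device string below are placeholders.

```python
# A minimal sketch, assuming the qai_hub Python client. The model path,
# input name/shape, and device string are placeholders.
import numpy as np
import qai_hub as hub

# Bring your own model: upload a local model file to AI Hub.
model = hub.upload_model("my_model.tflite")

# Bring your own data: run it through the model on a hosted device.
sample = np.random.rand(1, 3, 224, 224).astype(np.float32)
inference_job = hub.submit_inference_job(
    model=model,
    device=hub.Device("Samsung Galaxy S24 (Family)"),  # illustrative device name
    inputs={"image": [sample]},  # key must match the model's input name
)
outputs = inference_job.download_output_data()
```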

Models for all solutions

Mobile

Enabling intelligent connections and personalized applications across devices

Compute

Endless possibilities on a powerful device, built for AI

Automotive

Unlocking a new era of mobility

IoT

Deploy real-time AI across a wide range of devices to power next-generation user experiences

Explore All Models

Qualcomm AI Stack

Easily deploy optimized AI models on Qualcomm® devices to run on CPU, GPU, or NPU using TensorFlow Lite, ONNX Runtime, or Qualcomm® AI Engine Direct.
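
As an illustration, here is a minimal sketch of profiling a model on a hosted device with the qai_hub Python client; the model path and device name are placeholders, and the profile results (per-layer latency and CPU/GPU/NPU placement) typically appear alongside the job in the AI Hub dashboard.

```python
# A minimal sketch, assuming the qai_hub Python client. The model path and
# device name are placeholders.
import qai_hub as hub

model = hub.upload_model("my_model.tflite")  # any LiteRT or ONNX model you have

# Profile on a real hosted device; the report covers per-layer latency and
# which compute units (CPU, GPU, NPU) ran each operation.
profile_job = hub.submit_profile_job(
    model=model,
    device=hub.Device("Snapdragon 8 Elite QRD"),  # illustrative device name
)
profile_job.wait()  # results are also viewable in the AI Hub web dashboard
```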