Qualcomm® AI Hub

LiteRT

Google’s high-performance runtime for on-device AI (formerly TensorFlow Lite)

Cross‑platform on‑device ML

Leverage LiteRT and Qualcomm AI Hub to run powerful machine learning models on device, with support across multiple platforms and devices.

Submit on AI Hub

To use LiteRT, specify --target_runtime tflite in the options when submitting your compile job to Qualcomm AI Hub.
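As a rough sketch, the flag above is passed via the options string of a compile-job submission in the qai_hub Python client. The helper name, model path, and device name below are illustrative placeholders, and this assumes the client is installed and configured with an API token:

```python
# Illustrative sketch: submit a compile job targeting the LiteRT runtime.
# Assumes the qai_hub package is installed and an API token is configured;
# the helper name, model path, and device name are placeholders.
OPTIONS = "--target_runtime tflite"  # request a LiteRT (.tflite) artifact


def submit_litert_compile_job(model_path: str, device_name: str):
    """Submit a Qualcomm AI Hub compile job that produces a LiteRT model."""
    import qai_hub as hub  # deferred so the helper can be defined without the package

    return hub.submit_compile_job(
        model=model_path,               # e.g. a traced PyTorch model or ONNX file
        device=hub.Device(device_name), # e.g. "Samsung Galaxy S24" (assumed name)
        options=OPTIONS,
    )
```

The returned job object can then be polled for the compiled .tflite artifact once the job finishes.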

Bringing ML‑powered experiences to over 100K apps running on 2.7B devices
