Profile Job Results

Job ID
jgo19qv45
Status
Results Ready
Name
whisper_medium_decoder
Target Device
  • SA8295P ADP
  • Android 14
  • Qualcomm® SA8295P
Creator
ai-hub-support@qti.qualcomm.com
Target Model
Input Specs
input_ids: int32[1, 1]
position_ids: int32[1]
k_cache_self_0_in: float16[16, 1, 64, 199]
v_cache_self_0_in: float16[16, 1, 199, 64]
attention_mask: float16[1, 1, 1, 200]
k_cache_cross_0: float16[16, 1, 64, 1500]
v_cache_cross_0: float16[16, 1, 1500, 64]
k_cache_self_1_in: float16[16, 1, 64, 199]
v_cache_self_1_in: float16[16, 1, 199, 64]
k_cache_cross_1: float16[16, 1, 64, 1500]
v_cache_cross_1: float16[16, 1, 1500, 64]
k_cache_self_2_in: float16[16, 1, 64, 199]
v_cache_self_2_in: float16[16, 1, 199, 64]
k_cache_cross_2: float16[16, 1, 64, 1500]
v_cache_cross_2: float16[16, 1, 1500, 64]
k_cache_self_3_in: float16[16, 1, 64, 199]
v_cache_self_3_in: float16[16, 1, 199, 64]
k_cache_cross_3: float16[16, 1, 64, 1500]
v_cache_cross_3: float16[16, 1, 1500, 64]
k_cache_self_4_in: float16[16, 1, 64, 199]
v_cache_self_4_in: float16[16, 1, 199, 64]
k_cache_cross_4: float16[16, 1, 64, 1500]
v_cache_cross_4: float16[16, 1, 1500, 64]
k_cache_self_5_in: float16[16, 1, 64, 199]
v_cache_self_5_in: float16[16, 1, 199, 64]
k_cache_cross_5: float16[16, 1, 64, 1500]
v_cache_cross_5: float16[16, 1, 1500, 64]
k_cache_self_6_in: float16[16, 1, 64, 199]
v_cache_self_6_in: float16[16, 1, 199, 64]
k_cache_cross_6: float16[16, 1, 64, 1500]
v_cache_cross_6: float16[16, 1, 1500, 64]
k_cache_self_7_in: float16[16, 1, 64, 199]
v_cache_self_7_in: float16[16, 1, 199, 64]
k_cache_cross_7: float16[16, 1, 64, 1500]
v_cache_cross_7: float16[16, 1, 1500, 64]
k_cache_self_8_in: float16[16, 1, 64, 199]
v_cache_self_8_in: float16[16, 1, 199, 64]
k_cache_cross_8: float16[16, 1, 64, 1500]
v_cache_cross_8: float16[16, 1, 1500, 64]
k_cache_self_9_in: float16[16, 1, 64, 199]
v_cache_self_9_in: float16[16, 1, 199, 64]
k_cache_cross_9: float16[16, 1, 64, 1500]
v_cache_cross_9: float16[16, 1, 1500, 64]
k_cache_self_10_in: float16[16, 1, 64, 199]
v_cache_self_10_in: float16[16, 1, 199, 64]
k_cache_cross_10: float16[16, 1, 64, 1500]
v_cache_cross_10: float16[16, 1, 1500, 64]
k_cache_self_11_in: float16[16, 1, 64, 199]
v_cache_self_11_in: float16[16, 1, 199, 64]
k_cache_cross_11: float16[16, 1, 64, 1500]
v_cache_cross_11: float16[16, 1, 1500, 64]
k_cache_self_12_in: float16[16, 1, 64, 199]
v_cache_self_12_in: float16[16, 1, 199, 64]
k_cache_cross_12: float16[16, 1, 64, 1500]
v_cache_cross_12: float16[16, 1, 1500, 64]
k_cache_self_13_in: float16[16, 1, 64, 199]
v_cache_self_13_in: float16[16, 1, 199, 64]
k_cache_cross_13: float16[16, 1, 64, 1500]
v_cache_cross_13: float16[16, 1, 1500, 64]
k_cache_self_14_in: float16[16, 1, 64, 199]
v_cache_self_14_in: float16[16, 1, 199, 64]
k_cache_cross_14: float16[16, 1, 64, 1500]
v_cache_cross_14: float16[16, 1, 1500, 64]
k_cache_self_15_in: float16[16, 1, 64, 199]
v_cache_self_15_in: float16[16, 1, 199, 64]
k_cache_cross_15: float16[16, 1, 64, 1500]
v_cache_cross_15: float16[16, 1, 1500, 64]
k_cache_self_16_in: float16[16, 1, 64, 199]
v_cache_self_16_in: float16[16, 1, 199, 64]
k_cache_cross_16: float16[16, 1, 64, 1500]
v_cache_cross_16: float16[16, 1, 1500, 64]
k_cache_self_17_in: float16[16, 1, 64, 199]
v_cache_self_17_in: float16[16, 1, 199, 64]
k_cache_cross_17: float16[16, 1, 64, 1500]
v_cache_cross_17: float16[16, 1, 1500, 64]
k_cache_self_18_in: float16[16, 1, 64, 199]
v_cache_self_18_in: float16[16, 1, 199, 64]
k_cache_cross_18: float16[16, 1, 64, 1500]
v_cache_cross_18: float16[16, 1, 1500, 64]
k_cache_self_19_in: float16[16, 1, 64, 199]
v_cache_self_19_in: float16[16, 1, 199, 64]
k_cache_cross_19: float16[16, 1, 64, 1500]
v_cache_cross_19: float16[16, 1, 1500, 64]
k_cache_self_20_in: float16[16, 1, 64, 199]
v_cache_self_20_in: float16[16, 1, 199, 64]
k_cache_cross_20: float16[16, 1, 64, 1500]
v_cache_cross_20: float16[16, 1, 1500, 64]
k_cache_self_21_in: float16[16, 1, 64, 199]
v_cache_self_21_in: float16[16, 1, 199, 64]
k_cache_cross_21: float16[16, 1, 64, 1500]
v_cache_cross_21: float16[16, 1, 1500, 64]
k_cache_self_22_in: float16[16, 1, 64, 199]
v_cache_self_22_in: float16[16, 1, 199, 64]
k_cache_cross_22: float16[16, 1, 64, 1500]
v_cache_cross_22: float16[16, 1, 1500, 64]
k_cache_self_23_in: float16[16, 1, 64, 199]
v_cache_self_23_in: float16[16, 1, 199, 64]
k_cache_cross_23: float16[16, 1, 64, 1500]
v_cache_cross_23: float16[16, 1, 1500, 64]
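The per-layer cache spec above repeats identically for decoder layers 0–23. A minimal NumPy sketch that allocates dummy inputs matching these shapes and dtypes (placeholder zeros, not real encoder/decoder state) and tallies the cache footprint:

```python
import numpy as np

# Dummy input tensors matching the "Input Specs" table above.
# Values are placeholders; shapes/dtypes are taken verbatim from the spec.
N_LAYERS = 24  # layers 0..23 in the spec

inputs = {
    "input_ids": np.zeros((1, 1), dtype=np.int32),
    "position_ids": np.zeros((1,), dtype=np.int32),
    "attention_mask": np.zeros((1, 1, 1, 200), dtype=np.float16),
}
for i in range(N_LAYERS):
    inputs[f"k_cache_self_{i}_in"] = np.zeros((16, 1, 64, 199), dtype=np.float16)
    inputs[f"v_cache_self_{i}_in"] = np.zeros((16, 1, 199, 64), dtype=np.float16)
    inputs[f"k_cache_cross_{i}"] = np.zeros((16, 1, 64, 1500), dtype=np.float16)
    inputs[f"v_cache_cross_{i}"] = np.zeros((16, 1, 1500, 64), dtype=np.float16)

# Total bytes held by the KV-cache inputs alone (float16 = 2 bytes/element).
cache_bytes = sum(a.nbytes for name, a in inputs.items() if "cache" in name)
print(f"{len(inputs)} input tensors, {cache_bytes / 2**20:.0f} MiB of KV-cache")
# → 99 input tensors, 159 MiB of KV-cache
```

Note that the k caches are stored transposed relative to the v caches (head-dim 64 on different axes), so a host-side harness must lay the tensors out exactly as specified rather than assuming one canonical KV layout.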
Completion Time
4/18/2026, 9:01:15 AM
Versions
  • QAIRT: v2.45.0.260326154327
  • QNN Backend API: 5.45.0
  • QNN Core API: 2.34.0
  • Android: 14 (UQ1A.240205.002)
  • Build ID: SA8295P.HQX.4.5.6.0-00006-STD.PROD-1
  • AI Hub: aihub-2026.04.13.0
Estimated Inference Time
38.0 ms
Estimated Peak Memory Usage
112-118 MB
Compute Units
  • NPU: 5889
Stage                Time      Memory
First App Load       429 ms    2-8 MB
Subsequent App Load  431 ms    1-7 MB
Inference            38.0 ms   112-118 MB
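A back-of-envelope check on the inference figure: with `input_ids` of shape [1, 1], each forward pass decodes a single token, so the estimated per-step latency implies roughly:

```python
# Decode throughput implied by the estimated inference time above.
# One forward pass = one generated token at input_ids shape [1, 1].
inference_ms = 38.0
tokens_per_sec = 1000.0 / inference_ms
print(f"~{tokens_per_sec:.1f} tokens/s")
# → ~26.3 tokens/s
```

This is a ceiling for the decoder loop alone; end-to-end transcription speed also depends on the encoder pass and host-side pre/post-processing, which this profile does not cover.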
QNN Option                                                Value
default_graph_options.htp_options.optimizations[0].type   FINALIZE_OPTIMIZATION_FLAG
default_graph_options.htp_options.optimizations[0].value  3.0
default_graph_options.htp_options.precision               FLOAT16
default_graph_options.htp_options.vtcm_size               0