Profile Job Results

Job
jgo19q345 (Results Ready)
Name
whisper_medium_decoder
Target Device
  • Samsung Galaxy S24
  • Android 14
  • Snapdragon® 8 Gen 3 | SM8650
Creator
ai-hub-support@qti.qualcomm.com
Target Model
Input Specs
input_ids: int32[1, 1]
position_ids: int32[1]
k_cache_self_0_in: float16[16, 1, 64, 199]
v_cache_self_0_in: float16[16, 1, 199, 64]
attention_mask: float16[1, 1, 1, 200]
k_cache_cross_0: float16[16, 1, 64, 1500]
v_cache_cross_0: float16[16, 1, 1500, 64]
k_cache_self_1_in: float16[16, 1, 64, 199]
v_cache_self_1_in: float16[16, 1, 199, 64]
k_cache_cross_1: float16[16, 1, 64, 1500]
v_cache_cross_1: float16[16, 1, 1500, 64]
k_cache_self_2_in: float16[16, 1, 64, 199]
v_cache_self_2_in: float16[16, 1, 199, 64]
k_cache_cross_2: float16[16, 1, 64, 1500]
v_cache_cross_2: float16[16, 1, 1500, 64]
k_cache_self_3_in: float16[16, 1, 64, 199]
v_cache_self_3_in: float16[16, 1, 199, 64]
k_cache_cross_3: float16[16, 1, 64, 1500]
v_cache_cross_3: float16[16, 1, 1500, 64]
k_cache_self_4_in: float16[16, 1, 64, 199]
v_cache_self_4_in: float16[16, 1, 199, 64]
k_cache_cross_4: float16[16, 1, 64, 1500]
v_cache_cross_4: float16[16, 1, 1500, 64]
k_cache_self_5_in: float16[16, 1, 64, 199]
v_cache_self_5_in: float16[16, 1, 199, 64]
k_cache_cross_5: float16[16, 1, 64, 1500]
v_cache_cross_5: float16[16, 1, 1500, 64]
k_cache_self_6_in: float16[16, 1, 64, 199]
v_cache_self_6_in: float16[16, 1, 199, 64]
k_cache_cross_6: float16[16, 1, 64, 1500]
v_cache_cross_6: float16[16, 1, 1500, 64]
k_cache_self_7_in: float16[16, 1, 64, 199]
v_cache_self_7_in: float16[16, 1, 199, 64]
k_cache_cross_7: float16[16, 1, 64, 1500]
v_cache_cross_7: float16[16, 1, 1500, 64]
k_cache_self_8_in: float16[16, 1, 64, 199]
v_cache_self_8_in: float16[16, 1, 199, 64]
k_cache_cross_8: float16[16, 1, 64, 1500]
v_cache_cross_8: float16[16, 1, 1500, 64]
k_cache_self_9_in: float16[16, 1, 64, 199]
v_cache_self_9_in: float16[16, 1, 199, 64]
k_cache_cross_9: float16[16, 1, 64, 1500]
v_cache_cross_9: float16[16, 1, 1500, 64]
k_cache_self_10_in: float16[16, 1, 64, 199]
v_cache_self_10_in: float16[16, 1, 199, 64]
k_cache_cross_10: float16[16, 1, 64, 1500]
v_cache_cross_10: float16[16, 1, 1500, 64]
k_cache_self_11_in: float16[16, 1, 64, 199]
v_cache_self_11_in: float16[16, 1, 199, 64]
k_cache_cross_11: float16[16, 1, 64, 1500]
v_cache_cross_11: float16[16, 1, 1500, 64]
k_cache_self_12_in: float16[16, 1, 64, 199]
v_cache_self_12_in: float16[16, 1, 199, 64]
k_cache_cross_12: float16[16, 1, 64, 1500]
v_cache_cross_12: float16[16, 1, 1500, 64]
k_cache_self_13_in: float16[16, 1, 64, 199]
v_cache_self_13_in: float16[16, 1, 199, 64]
k_cache_cross_13: float16[16, 1, 64, 1500]
v_cache_cross_13: float16[16, 1, 1500, 64]
k_cache_self_14_in: float16[16, 1, 64, 199]
v_cache_self_14_in: float16[16, 1, 199, 64]
k_cache_cross_14: float16[16, 1, 64, 1500]
v_cache_cross_14: float16[16, 1, 1500, 64]
k_cache_self_15_in: float16[16, 1, 64, 199]
v_cache_self_15_in: float16[16, 1, 199, 64]
k_cache_cross_15: float16[16, 1, 64, 1500]
v_cache_cross_15: float16[16, 1, 1500, 64]
k_cache_self_16_in: float16[16, 1, 64, 199]
v_cache_self_16_in: float16[16, 1, 199, 64]
k_cache_cross_16: float16[16, 1, 64, 1500]
v_cache_cross_16: float16[16, 1, 1500, 64]
k_cache_self_17_in: float16[16, 1, 64, 199]
v_cache_self_17_in: float16[16, 1, 199, 64]
k_cache_cross_17: float16[16, 1, 64, 1500]
v_cache_cross_17: float16[16, 1, 1500, 64]
k_cache_self_18_in: float16[16, 1, 64, 199]
v_cache_self_18_in: float16[16, 1, 199, 64]
k_cache_cross_18: float16[16, 1, 64, 1500]
v_cache_cross_18: float16[16, 1, 1500, 64]
k_cache_self_19_in: float16[16, 1, 64, 199]
v_cache_self_19_in: float16[16, 1, 199, 64]
k_cache_cross_19: float16[16, 1, 64, 1500]
v_cache_cross_19: float16[16, 1, 1500, 64]
k_cache_self_20_in: float16[16, 1, 64, 199]
v_cache_self_20_in: float16[16, 1, 199, 64]
k_cache_cross_20: float16[16, 1, 64, 1500]
v_cache_cross_20: float16[16, 1, 1500, 64]
k_cache_self_21_in: float16[16, 1, 64, 199]
v_cache_self_21_in: float16[16, 1, 199, 64]
k_cache_cross_21: float16[16, 1, 64, 1500]
v_cache_cross_21: float16[16, 1, 1500, 64]
k_cache_self_22_in: float16[16, 1, 64, 199]
v_cache_self_22_in: float16[16, 1, 199, 64]
k_cache_cross_22: float16[16, 1, 64, 1500]
v_cache_cross_22: float16[16, 1, 1500, 64]
k_cache_self_23_in: float16[16, 1, 64, 199]
v_cache_self_23_in: float16[16, 1, 199, 64]
k_cache_cross_23: float16[16, 1, 64, 1500]
v_cache_cross_23: float16[16, 1, 1500, 64]
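For local sanity checks, placeholder inputs matching these specs can be generated with NumPy. This is a minimal sketch, assuming zero-filled tensors are acceptable stand-ins for the first decode step; the tensor names, dtypes, and shapes are taken directly from the list above (24 decoder layers, indices 0-23).

```python
import numpy as np

NUM_LAYERS = 24  # layers 0-23 in the input specs above


def make_dummy_inputs():
    """Build a dict of zero-filled tensors matching the decoder's input specs."""
    inputs = {
        "input_ids": np.zeros((1, 1), dtype=np.int32),
        "position_ids": np.zeros((1,), dtype=np.int32),
        "attention_mask": np.zeros((1, 1, 1, 200), dtype=np.float16),
    }
    for i in range(NUM_LAYERS):
        # Self-attention KV cache inputs (one slot shorter than the mask length).
        inputs[f"k_cache_self_{i}_in"] = np.zeros((16, 1, 64, 199), dtype=np.float16)
        inputs[f"v_cache_self_{i}_in"] = np.zeros((16, 1, 199, 64), dtype=np.float16)
        # Cross-attention KV cache, fixed to the 1500 encoder frames.
        inputs[f"k_cache_cross_{i}"] = np.zeros((16, 1, 64, 1500), dtype=np.float16)
        inputs[f"v_cache_cross_{i}"] = np.zeros((16, 1, 1500, 64), dtype=np.float16)
    return inputs
```

This yields 3 scalar/mask inputs plus 4 cache tensors per layer (99 tensors total), which can be fed to whatever runtime wrapper you use for the compiled model.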
Completion Time
4/18/2026, 8:57:26 AM
Versions
  • QAIRT: v2.45.0.260326154327
  • QNN Backend API: 5.45.0
  • QNN Core API: 2.34.0
  • Android: 14 (UP1A.231005.007)
  • AI Hub: aihub-2026.04.13.0
Estimated Inference Time
28.4 ms
Estimated Peak Memory Usage
160 - 173 MB
Compute Units
  • NPU: 5889
Stage               | Time    | Memory
First App Load      | 448 ms  | 2 - 9 MB
Subsequent App Load | 393 ms  | 3 - 12 MB
Inference           | 28.4 ms | 160 - 173 MB
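Because each inference consumes a single token (input_ids is [1, 1]), the estimated inference time is effectively a per-token decode latency, so decode throughput follows directly. A quick back-of-the-envelope check:

```python
# Estimated single-token decode latency from the profile above (ms).
inference_ms = 28.4

# One token per inference, so throughput is just the reciprocal.
tokens_per_second = 1000.0 / inference_ms
print(f"~{tokens_per_second:.1f} decoder tokens/s")  # about 35 tokens/s
```

This ignores per-step host overhead (cache shifting, tokenizer work), so treat it as an upper bound on end-to-end decode speed.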
QNN Option                                               | Value
context_options.htp_options.performance_mode             | BURST
default_graph_options.htp_options.optimizations[0].type  | FINALIZE_OPTIMIZATION_FLAG
default_graph_options.htp_options.optimizations[0].value | 3.0
default_graph_options.htp_options.precision              | FLOAT16
default_graph_options.htp_options.vtcm_size              | 0
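For record-keeping or comparison across profile runs, the reported QNN settings can be mirrored in a plain mapping. This is purely illustrative; the keys simply reproduce the option paths from the table above and are not an API of any library.

```python
# Illustrative only: keys reproduce the option paths reported in the profile.
qnn_settings = {
    "context_options.htp_options.performance_mode": "BURST",
    "default_graph_options.htp_options.optimizations[0].type": "FINALIZE_OPTIMIZATION_FLAG",
    "default_graph_options.htp_options.optimizations[0].value": 3.0,
    "default_graph_options.htp_options.precision": "FLOAT16",
    "default_graph_options.htp_options.vtcm_size": 0,
}

# e.g. confirm the graph ran in float16 with the burst performance mode
assert qnn_settings["default_graph_options.htp_options.precision"] == "FLOAT16"
assert qnn_settings["context_options.htp_options.performance_mode"] == "BURST"
```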
