Profile Job Results

Job ID
jpx7z2yjg
Status
Results Ready
Name
whisper_medium_decoder
Target Device
  • Snapdragon X Elite CRD
  • Windows 11
  • Snapdragon® X Elite | SC8380XP
Creator
ai-hub-support@qti.qualcomm.com
Target Model
Input Specs
input_ids: int32[1, 1]
position_ids: int32[1]
k_cache_self_0_in: float16[16, 1, 64, 199]
v_cache_self_0_in: float16[16, 1, 199, 64]
attention_mask: float16[1, 1, 1, 200]
k_cache_cross_0: float16[16, 1, 64, 1500]
v_cache_cross_0: float16[16, 1, 1500, 64]
k_cache_self_1_in: float16[16, 1, 64, 199]
v_cache_self_1_in: float16[16, 1, 199, 64]
k_cache_cross_1: float16[16, 1, 64, 1500]
v_cache_cross_1: float16[16, 1, 1500, 64]
k_cache_self_2_in: float16[16, 1, 64, 199]
v_cache_self_2_in: float16[16, 1, 199, 64]
k_cache_cross_2: float16[16, 1, 64, 1500]
v_cache_cross_2: float16[16, 1, 1500, 64]
k_cache_self_3_in: float16[16, 1, 64, 199]
v_cache_self_3_in: float16[16, 1, 199, 64]
k_cache_cross_3: float16[16, 1, 64, 1500]
v_cache_cross_3: float16[16, 1, 1500, 64]
k_cache_self_4_in: float16[16, 1, 64, 199]
v_cache_self_4_in: float16[16, 1, 199, 64]
k_cache_cross_4: float16[16, 1, 64, 1500]
v_cache_cross_4: float16[16, 1, 1500, 64]
k_cache_self_5_in: float16[16, 1, 64, 199]
v_cache_self_5_in: float16[16, 1, 199, 64]
k_cache_cross_5: float16[16, 1, 64, 1500]
v_cache_cross_5: float16[16, 1, 1500, 64]
k_cache_self_6_in: float16[16, 1, 64, 199]
v_cache_self_6_in: float16[16, 1, 199, 64]
k_cache_cross_6: float16[16, 1, 64, 1500]
v_cache_cross_6: float16[16, 1, 1500, 64]
k_cache_self_7_in: float16[16, 1, 64, 199]
v_cache_self_7_in: float16[16, 1, 199, 64]
k_cache_cross_7: float16[16, 1, 64, 1500]
v_cache_cross_7: float16[16, 1, 1500, 64]
k_cache_self_8_in: float16[16, 1, 64, 199]
v_cache_self_8_in: float16[16, 1, 199, 64]
k_cache_cross_8: float16[16, 1, 64, 1500]
v_cache_cross_8: float16[16, 1, 1500, 64]
k_cache_self_9_in: float16[16, 1, 64, 199]
v_cache_self_9_in: float16[16, 1, 199, 64]
k_cache_cross_9: float16[16, 1, 64, 1500]
v_cache_cross_9: float16[16, 1, 1500, 64]
k_cache_self_10_in: float16[16, 1, 64, 199]
v_cache_self_10_in: float16[16, 1, 199, 64]
k_cache_cross_10: float16[16, 1, 64, 1500]
v_cache_cross_10: float16[16, 1, 1500, 64]
k_cache_self_11_in: float16[16, 1, 64, 199]
v_cache_self_11_in: float16[16, 1, 199, 64]
k_cache_cross_11: float16[16, 1, 64, 1500]
v_cache_cross_11: float16[16, 1, 1500, 64]
k_cache_self_12_in: float16[16, 1, 64, 199]
v_cache_self_12_in: float16[16, 1, 199, 64]
k_cache_cross_12: float16[16, 1, 64, 1500]
v_cache_cross_12: float16[16, 1, 1500, 64]
k_cache_self_13_in: float16[16, 1, 64, 199]
v_cache_self_13_in: float16[16, 1, 199, 64]
k_cache_cross_13: float16[16, 1, 64, 1500]
v_cache_cross_13: float16[16, 1, 1500, 64]
k_cache_self_14_in: float16[16, 1, 64, 199]
v_cache_self_14_in: float16[16, 1, 199, 64]
k_cache_cross_14: float16[16, 1, 64, 1500]
v_cache_cross_14: float16[16, 1, 1500, 64]
k_cache_self_15_in: float16[16, 1, 64, 199]
v_cache_self_15_in: float16[16, 1, 199, 64]
k_cache_cross_15: float16[16, 1, 64, 1500]
v_cache_cross_15: float16[16, 1, 1500, 64]
k_cache_self_16_in: float16[16, 1, 64, 199]
v_cache_self_16_in: float16[16, 1, 199, 64]
k_cache_cross_16: float16[16, 1, 64, 1500]
v_cache_cross_16: float16[16, 1, 1500, 64]
k_cache_self_17_in: float16[16, 1, 64, 199]
v_cache_self_17_in: float16[16, 1, 199, 64]
k_cache_cross_17: float16[16, 1, 64, 1500]
v_cache_cross_17: float16[16, 1, 1500, 64]
k_cache_self_18_in: float16[16, 1, 64, 199]
v_cache_self_18_in: float16[16, 1, 199, 64]
k_cache_cross_18: float16[16, 1, 64, 1500]
v_cache_cross_18: float16[16, 1, 1500, 64]
k_cache_self_19_in: float16[16, 1, 64, 199]
v_cache_self_19_in: float16[16, 1, 199, 64]
k_cache_cross_19: float16[16, 1, 64, 1500]
v_cache_cross_19: float16[16, 1, 1500, 64]
k_cache_self_20_in: float16[16, 1, 64, 199]
v_cache_self_20_in: float16[16, 1, 199, 64]
k_cache_cross_20: float16[16, 1, 64, 1500]
v_cache_cross_20: float16[16, 1, 1500, 64]
k_cache_self_21_in: float16[16, 1, 64, 199]
v_cache_self_21_in: float16[16, 1, 199, 64]
k_cache_cross_21: float16[16, 1, 64, 1500]
v_cache_cross_21: float16[16, 1, 1500, 64]
k_cache_self_22_in: float16[16, 1, 64, 199]
v_cache_self_22_in: float16[16, 1, 199, 64]
k_cache_cross_22: float16[16, 1, 64, 1500]
v_cache_cross_22: float16[16, 1, 1500, 64]
k_cache_self_23_in: float16[16, 1, 64, 199]
v_cache_self_23_in: float16[16, 1, 199, 64]
k_cache_cross_23: float16[16, 1, 64, 1500]
v_cache_cross_23: float16[16, 1, 1500, 64]
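The 24 pairs of self-attention and cross-attention cache inputs dominate the model's activation footprint. A quick sketch (pure arithmetic over the shapes listed above; float16 is 2 bytes per element) totals them up:

```python
import math

# Shapes taken from the input specs above (float16 = 2 bytes per element).
# The v_cache tensors are transposed relative to k_cache ([..., 199, 64]
# vs. [..., 64, 199]) but hold the same number of elements.
LAYERS = 24
SELF_SHAPE = (16, 1, 64, 199)    # k_cache_self_*_in / v_cache_self_*_in
CROSS_SHAPE = (16, 1, 64, 1500)  # k_cache_cross_* / v_cache_cross_*
BYTES_PER_ELEM = 2

def tensor_bytes(shape):
    return math.prod(shape) * BYTES_PER_ELEM

self_total = 2 * LAYERS * tensor_bytes(SELF_SHAPE)    # k + v per layer
cross_total = 2 * LAYERS * tensor_bytes(CROSS_SHAPE)  # k + v per layer

print(f"self-attention caches:  {self_total / 2**20:.1f} MiB")
print(f"cross-attention caches: {cross_total / 2**20:.1f} MiB")
print(f"total: {(self_total + cross_total) / 2**20:.1f} MiB")
```

The roughly 159 MiB of cache tensors is comparable to the 160 MB estimated peak memory reported below, suggesting the KV caches account for most of the footprint.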
Completion Time
4/18/2026, 9:01:13 AM
Versions
  • QAIRT: v2.45.0.260326154327
  • QNN Backend API: 5.45.0
  • QNN Core API: 2.34.0
  • Windows: Windows 11 (26100)
  • Build ID: APSS.WP_HA.1.0-08200-SC8380XRELSFNWZA-3
  • AI Hub: aihub-2026.04.13.0
Estimated Inference Time
27.5 ms
Estimated Peak Memory Usage
160 MB
Compute Units
  • NPU: 5889 layers
Stage                 Time     Memory
First App Load        881 ms   33 MB
Subsequent App Load   1.45 s   33 MB
Inference             27.5 ms  160 MB
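From the figures above, a rough decode-rate estimate follows, assuming each decoder invocation emits one token (consistent with the input_ids: int32[1, 1] spec):

```python
# Back-of-the-envelope decode rate from the profiled figures above,
# assuming one token per decoder invocation (input_ids is int32[1, 1]).
inference_s = 0.0275  # 27.5 ms estimated inference time per invocation
tokens_per_second = 1.0 / inference_s
print(f"{tokens_per_second:.1f} tokens/s")  # → 36.4 tokens/s
```

This is an upper bound for sustained decoding; it excludes app load time, the encoder pass, and any host-side sampling overhead.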
QNN Option                                                 Value
context_options.htp_options.performance_mode               BURST
default_graph_options.htp_options.optimizations[0].type    FINALIZE_OPTIMIZATION_FLAG
default_graph_options.htp_options.optimizations[0].value   3.0
default_graph_options.htp_options.precision                FLOAT16
default_graph_options.htp_options.vtcm_size                0
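The dotted option paths above encode a nested structure (with `[0]` denoting a list index). A small illustrative helper, not part of any Qualcomm API, expands them into a dict for easier inspection:

```python
import re

# QNN option paths and values copied from the table above.
OPTIONS = {
    "context_options.htp_options.performance_mode": "BURST",
    "default_graph_options.htp_options.optimizations[0].type": "FINALIZE_OPTIMIZATION_FLAG",
    "default_graph_options.htp_options.optimizations[0].value": 3.0,
    "default_graph_options.htp_options.precision": "FLOAT16",
    "default_graph_options.htp_options.vtcm_size": 0,
}

def expand(flat):
    """Turn dotted paths (with optional [i] list indices) into a nested dict."""
    tree = {}
    for path, value in flat.items():
        node = tree
        parts = path.split(".")
        for i, part in enumerate(parts):
            m = re.fullmatch(r"(\w+)\[(\d+)\]", part)
            key, idx = (m.group(1), int(m.group(2))) if m else (part, None)
            last = i == len(parts) - 1
            if idx is None:
                if last:
                    node[key] = value
                else:
                    node = node.setdefault(key, {})
            else:
                lst = node.setdefault(key, [])
                while len(lst) <= idx:  # grow the list to cover the index
                    lst.append({})
                if last:
                    lst[idx] = value
                else:
                    node = lst[idx]
    return tree

config = expand(OPTIONS)
```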
