Profile Job Results

Job ID: j7gj3r2vp
Status: Results Ready
Name
whisper_base_en_WhisperDecoder
Target Device
  • XR2 Gen 2 (Proxy)
  • Android 13
Creator
ai-hub-support@qti.qualcomm.com
Target Model
Input Specs
  • x: int32[1, 1]
  • index: int32[1, 1]
  • k_cache_cross: float32[6, 8, 64, 1500]
  • v_cache_cross: float32[6, 8, 1500, 64]
  • k_cache_self: float32[6, 8, 64, 224]
  • v_cache_self: float32[6, 8, 224, 64]
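For local experimentation, dummy tensors matching these input specs can be generated up front. This is a minimal sketch using NumPy; the dictionary keys simply mirror the spec names above and are not tied to any particular runtime API:

```python
import numpy as np

# Zero-filled dummy inputs matching the WhisperDecoder input specs above
# (illustrative only; real inputs come from the tokenizer and encoder).
inputs = {
    "x": np.zeros((1, 1), dtype=np.int32),
    "index": np.zeros((1, 1), dtype=np.int32),
    "k_cache_cross": np.zeros((6, 8, 64, 1500), dtype=np.float32),
    "v_cache_cross": np.zeros((6, 8, 1500, 64), dtype=np.float32),
    "k_cache_self": np.zeros((6, 8, 64, 224), dtype=np.float32),
    "v_cache_self": np.zeros((6, 8, 224, 64), dtype=np.float32),
}

for name, arr in inputs.items():
    print(name, arr.dtype, arr.shape)
```

Note the cross-attention caches are sized for the full 1500-frame encoder output, while the self-attention caches cap the decoded sequence at 224 positions.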
Completion Time
8/11/2024, 5:48:30 AM
Versions
  • TensorFlow Lite: 2.16.1
  • QNN TfLite Delegate: v2.24.0.240626131148_96320
  • Android: 13 (TP1A.220624.014)
  • AI Hub: aihub-2024.08.01.0
Estimated Inference Time
32.9 ms
Estimated Peak Memory Usage
42-123 MB
Compute Units
  • NPU: 979
  • CPU: 3
  • GPU: 1
Stage (Time, Memory)
  • First App Load: 4.14 s, 586-596 MB
  • Subsequent App Load: 768 ms, 87-169 MB
  • Inference: 32.9 ms, 42-123 MB
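The steady-state inference time above implies a rough single-stream throughput ceiling. A back-of-envelope sketch (real throughput will be lower once audio pre/post-processing and the encoder are included):

```python
# Estimated inference time from the profile above.
inference_ms = 32.9

# Single-stream upper bound: one inference at a time, back to back.
throughput_per_s = 1000.0 / inference_ms
print(f"~{throughput_per_s:.1f} decoder inferences/s")
```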
TensorFlow Lite Options
  • number_of_threads: 4

QNN Delegate Options
  • backend_type: kHtpBackend
  • log_level: kLogLevelWarn
  • htp_options.performance_mode: kHtpBurst
  • htp_options.precision: kHtpFp16
  • htp_options.useConvHmx: true

NNAPI Delegate Options
  • accelerator_name: "qti-dsp"
  • execution_preference: kSustainedSpeed
  • allow_fp16: true

GPUv2 Delegate Options
  • inference_preference: TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED
  • inference_priority1: TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY
  • inference_priority2: TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE
  • inference_priority3: TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION

XNNPACK Delegate Options
  (none)
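The option values reported above can be collected into plain Python dictionaries for reproducing the run on-device. This is a hedged sketch: the dictionaries only echo the profile's reported values, and the delegate library name and option keys shown in the comment are assumptions, not verified against a specific QNN TFLite Delegate release:

```python
# QNN delegate options as reported by this profile job (illustrative;
# the exact option keys accepted by a given delegate release may differ).
qnn_delegate_options = {
    "backend_type": "kHtpBackend",
    "log_level": "kLogLevelWarn",
    "htp_options.performance_mode": "kHtpBurst",
    "htp_options.precision": "kHtpFp16",
    "htp_options.useConvHmx": "true",
}

tflite_options = {"number_of_threads": 4}

# A typical (hypothetical) way to apply these with the TF Lite Python API:
#   delegate = tf.lite.experimental.load_delegate(
#       "libQnnTFLiteDelegate.so", options=qnn_delegate_options)
#   interpreter = tf.lite.Interpreter(
#       model_path="whisper_base_en_WhisperDecoder.tflite",  # hypothetical path
#       experimental_delegates=[delegate],
#       num_threads=tflite_options["number_of_threads"])
print(qnn_delegate_options["backend_type"], tflite_options["number_of_threads"])
```

The HTP (Hexagon Tensor Processor) backend with kHtpFp16 precision is consistent with the compute-unit breakdown above, where nearly all operations landed on the NPU.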
