Profile Job Results

Job ID: joprxl39p
Status: Results Ready
Name
whisper_tiny_en_WhisperDecoder
Target Device
  • XR2 Gen 2 (Proxy)
  • Android 13
Creator
ai-hub-support@qti.qualcomm.com
Input Specs
  • x: int32[1, 1]
  • index: int32[1, 1]
  • k_cache_cross: float32[4, 6, 64, 1500]
  • v_cache_cross: float32[4, 6, 1500, 64]
  • k_cache_self: float32[4, 6, 64, 224]
  • v_cache_self: float32[4, 6, 224, 64]
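As a sketch, the input signature above can be reproduced with NumPy — dtypes and shapes are copied from the Input Specs; the zero-filled tensors are placeholders, not real model inputs:

```python
import numpy as np

# Shapes and dtypes copied from the Input Specs above.
input_specs = {
    "x": ((1, 1), np.int32),
    "index": ((1, 1), np.int32),
    "k_cache_cross": ((4, 6, 64, 1500), np.float32),
    "v_cache_cross": ((4, 6, 1500, 64), np.float32),
    "k_cache_self": ((4, 6, 64, 224), np.float32),
    "v_cache_self": ((4, 6, 224, 64), np.float32),
}

# Placeholder zero tensors matching the signature.
sample_inputs = {name: np.zeros(shape, dtype)
                 for name, (shape, dtype) in input_specs.items()}

# Total input size in MB (int32 and float32 are both 4 bytes/element).
total_mb = sum(a.nbytes for a in sample_inputs.values()) / 1e6
print(round(total_mb, 1))  # → 21.2
```

Note that the cross-attention KV caches dominate the input footprint (~18.4 MB of the ~21.2 MB total), which is consistent with the 16-85 MB peak-memory estimate below.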
Completion Time
8/11/2024, 6:40:44 AM
Versions
  • TensorFlow Lite: 2.16.1
  • QNN TfLite Delegate: v2.24.0.240626131148_96320
  • Android: 13 (TP1A.220624.014)
  • AI Hub: aihub-2024.08.01.0
Estimated Inference Time
16.3 ms
Estimated Peak Memory Usage
16 - 85 MB
Compute Units
  • NPU: 555
  • CPU: 1
  • GPU: 1
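The counts above are layers assigned to each compute unit; a quick back-of-envelope check (counts taken from the breakdown above) shows almost the entire graph runs on the NPU:

```python
# Layer counts from the Compute Units breakdown above.
units = {"NPU": 555, "CPU": 1, "GPU": 1}

total = sum(units.values())
npu_share = 100 * units["NPU"] / total
print(f"{npu_share:.1f}% of {total} layers on NPU")  # → 99.6% of 557 layers on NPU
```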
Stage                 Time     Memory
First App Load        2.52 s   286-296 MB
Subsequent App Load   639 ms   52-123 MB
Inference             16.3 ms  16-85 MB
TensorFlow Lite Options
  • number_of_threads: 4
QNN Delegate Options
  • backend_type: kHtpBackend
  • log_level: kLogLevelWarn
  • htp_options.performance_mode: kHtpBurst
  • htp_options.precision: kHtpFp16
  • htp_options.useConvHmx: true
NNAPI Delegate Options
  • accelerator_name: "qti-dsp"
  • execution_preference: kSustainedSpeed
  • allow_fp16: true
GPUv2 Delegate Options
  • inference_preference: TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED
  • inference_priority1: TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY
  • inference_priority2: TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE
  • inference_priority3: TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION
XNNPACK Delegate Options
  • (none set)
