Profile Job Results

Job ID: jqpy8o37g
Status: Results Ready
Name: whisper_tiny_en_WhisperDecoder
Target Device:
- SA8650 (Proxy)
- Android 13
Creator: ai-hub-support@qti.qualcomm.com

Target Model
Input Specs
- x: int32[1, 1]
- index: int32[1, 1]
- k_cache_cross: float32[4, 6, 64, 1500]
- v_cache_cross: float32[4, 6, 1500, 64]
- k_cache_self: float32[4, 6, 64, 224]
- v_cache_self: float32[4, 6, 224, 64]

Completion Time: 8/11/2024, 7:54:44 AM
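As a back-of-envelope check, the input tensors above alone account for roughly 21 MB, which lines up with the low end of the 21-28 MB inference memory reported below. A minimal sketch (pure Python; names, dtypes, and shapes are taken from the input specs, while the memory comparison is my own inference, not something the report states):

```python
# Input specs as reported by the profile job: name -> (dtype, shape).
INPUT_SPECS = {
    "x":             ("int32",   (1, 1)),
    "index":         ("int32",   (1, 1)),
    "k_cache_cross": ("float32", (4, 6, 64, 1500)),
    "v_cache_cross": ("float32", (4, 6, 1500, 64)),
    "k_cache_self":  ("float32", (4, 6, 64, 224)),
    "v_cache_self":  ("float32", (4, 6, 224, 64)),
}

BYTES_PER_ELEMENT = {"int32": 4, "float32": 4}

def input_footprint_bytes(specs):
    """Total bytes needed to hold all input tensors at once."""
    total = 0
    for dtype, shape in specs.values():
        n = 1
        for dim in shape:
            n *= dim
        total += n * BYTES_PER_ELEMENT[dtype]
    return total

total = input_footprint_bytes(INPUT_SPECS)
print(f"{total} bytes ~ {total / 1e6:.1f} MB")  # 21184520 bytes ~ 21.2 MB
```

The cross-attention caches dominate: the two [4, 6, 64, 1500] / [4, 6, 1500, 64] tensors are about 9.2 MB each.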
Versions
- TensorFlow Lite: 2.16.1
- QNN TfLite Delegate: v2.24.0.240626131148_96320
- Android: 13 (TP1A.220624.014)
- AI Hub: aihub-2024.08.01.0
Estimated Inference Time: 10.5 ms
Estimated Peak Memory Usage: 21 - 28 MB
Compute Units (layer counts)
- NPU: 552
- CPU: 4
- GPU: 1
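Nearly all layers are dispatched to the NPU. A quick computation of the split, using the counts above:

```python
# Layer counts per compute unit, as reported by the profile job.
units = {"NPU": 552, "CPU": 4, "GPU": 1}

total_layers = sum(units.values())
npu_share = 100.0 * units["NPU"] / total_layers

print(f"{total_layers} layers total, {npu_share:.1f}% on NPU")
# 557 layers total, 99.1% on NPU
```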
| Stage | Time | Memory |
|---|---|---|
| First App Load | 1.52 s | 309-310 MB |
| Subsequent App Load | 282 ms | 63-305 MB |
| Inference | 10.5 ms | 21-28 MB |
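Since the decoder consumes a single token per call (x: int32[1, 1]), the 10.5 ms figure is per decoding step, not per sequence. A rough sequence-level estimate, assuming one token per step and taking the 224-slot self-attention cache as the maximum decode length (my assumption; the report does not state a sequence length):

```python
MS_PER_STEP = 10.5  # estimated inference time per decoder call (from the report)
MAX_TOKENS = 224    # self-attention cache length; assumed max decode length

total_ms = MS_PER_STEP * MAX_TOKENS
print(f"Decoding {MAX_TOKENS} tokens: ~{total_ms / 1000:.2f} s")  # ~2.35 s
```

A real transcription usually stops at an end-of-text token well before the cache fills, so this is an upper bound under the stated assumptions.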
| TensorFlow Lite | Value |
|---|---|
| number_of_threads | 4 |
| QNN Delegate | Value |
|---|---|
| backend_type | kHtpBackend |
| log_level | kLogLevelWarn |
| htp_options.performance_mode | kHtpBurst |
| htp_options.precision | kHtpFp16 |
| htp_options.useConvHmx | true |
| GPUv2 Delegate | Value |
|---|---|
| inference_preference | TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED |
| inference_priority1 | TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY |
| inference_priority2 | TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE |
| inference_priority3 | TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION |
| XNNPACK Delegate | Value |
|---|---|

(No XNNPACK options reported.)