Profile Job Results
Job ID
jwgodlkq5
Status
Results Ready
Name
whisper_base_en_WhisperDecoder
Target Device
- QCS8550 (Proxy)
- Android 12
Creator
ai-hub-support@qti.qualcomm.com
Input Specs
- x: int32[1, 1]
- index: int32[1, 1]
- k_cache_cross: float32[6, 8, 64, 1500]
- v_cache_cross: float32[6, 8, 1500, 64]
- k_cache_self: float32[6, 8, 64, 224]
- v_cache_self: float32[6, 8, 224, 64]

Completion Time
8/11/2024, 5:49:34 AM
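The float32 cache tensors in the input specs account for essentially all of the decoder's working memory. A quick back-of-the-envelope check (shapes copied from the specs above, 4 bytes per float32 element) puts the KV caches alone at about 42.4 MB, in line with the 42 - 44 MB estimated peak reported below:

```python
# Shapes copied from the Input Specs above; float32 = 4 bytes/element.
from math import prod

BYTES_PER_FLOAT32 = 4

kv_cache_shapes = {
    "k_cache_cross": (6, 8, 64, 1500),
    "v_cache_cross": (6, 8, 1500, 64),
    "k_cache_self": (6, 8, 64, 224),
    "v_cache_self": (6, 8, 224, 64),
}

cache_bytes = {name: prod(shape) * BYTES_PER_FLOAT32
               for name, shape in kv_cache_shapes.items()}

total_mb = sum(cache_bytes.values()) / 1e6
for name, nbytes in cache_bytes.items():
    print(f"{name}: {nbytes / 1e6:.1f} MB")
print(f"total: {total_mb:.1f} MB")  # prints "total: 42.4 MB"
```

The two cross-attention caches (about 18.4 MB each, covering all 1500 encoder frames) dominate; the self-attention caches (224-token window) add only about 5.5 MB combined.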
Versions
- TensorFlow Lite: 2.16.1
- QNN TfLite Delegate: v2.24.0.240626131148_96320
- Android: 13 (TP1A.220624.014)
- AI Hub: aihub-2024.08.01.0
Estimated Inference Time
24.7 ms
Estimated Peak Memory Usage
42 - 44 MB
Compute Units
Unit | Layers |
---|---|
NPU | 976 |
CPU | 6 |
GPU | 1 |
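The layer counts above are heavily NPU-weighted; a short Python check (counts copied from the table, percentage math is the only addition) makes the proportions explicit:

```python
# Layer counts from the Compute Units table above.
units = {"NPU": 976, "CPU": 6, "GPU": 1}
total = sum(units.values())  # 983 layers in total
shares = {name: 100 * count / total for name, count in units.items()}
print(shares)  # NPU carries about 99.3% of layers
```

With over 99% of layers on the NPU, the CPU and GPU fallbacks (7 layers between them) should have little impact on the 24.7 ms inference estimate.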
Stage | Time | Memory |
---|---|---|
First App Load | 3.02 s | 597-599 MB |
Subsequent App Load | 480 ms | 98-721 MB |
Inference | 24.7 ms | 42-44 MB |
TensorFlow Lite | Value |
---|---|
number_of_threads | 4 |
QNN Delegate | Value |
---|---|
backend_type | kHtpBackend |
log_level | kLogLevelWarn |
htp_options.performance_mode | kHtpBurst |
htp_options.precision | kHtpFp16 |
htp_options.useConvHmx | true |
GPUv2 Delegate | Value |
---|---|
inference_preference | TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED |
inference_priority1 | TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY |
inference_priority2 | TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE |
inference_priority3 | TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION |
XNNPACK Delegate | Value |
---|---|
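The option tables above record the runtime settings AI Hub applied during profiling. A minimal sketch of reproducing a similar setup locally with the TensorFlow Lite Python API follows; the model filename, the delegate library path, and the dotted option-key spellings passed to `load_delegate` are assumptions taken verbatim from the tables, and the exact keys the QNN delegate shared library accepts may differ:

```python
def profile_options():
    """Collect the settings from the option tables above as plain dicts.

    The dotted QNN keys mirror the profile report; whether the QNN
    delegate .so accepts exactly these spellings is an assumption.
    """
    tflite_options = {"number_of_threads": 4}
    qnn_options = {
        "backend_type": "kHtpBackend",
        "log_level": "kLogLevelWarn",
        "htp_options.performance_mode": "kHtpBurst",
        "htp_options.precision": "kHtpFp16",
        "htp_options.useConvHmx": "true",
    }
    return tflite_options, qnn_options


def load_decoder(model_path, delegate_path):
    """Load the decoder with the profiled settings (requires tensorflow
    and an on-device QNN delegate library; paths are hypothetical)."""
    import tensorflow as tf

    tflite_opts, qnn_opts = profile_options()
    qnn = tf.lite.experimental.load_delegate(delegate_path, options=qnn_opts)
    interpreter = tf.lite.Interpreter(
        model_path=model_path,
        num_threads=tflite_opts["number_of_threads"],
        experimental_delegates=[qnn],
    )
    interpreter.allocate_tensors()
    return interpreter
```

Usage would look like `load_decoder("whisper_base_en_WhisperDecoder.tflite", "libQnnTFLiteDelegate.so")`, with both paths adjusted to wherever the compiled model and delegate library live on the target device.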