Profile Job Results
Job ID: joprxl39p (Results Ready)
Name
whisper_tiny_en_WhisperDecoder
Target Device
- XR2 Gen 2 (Proxy)
- Android 13
Creator
ai-hub-support@qti.qualcomm.com
Target Model
Input Specs
- x: int32[1, 1]
- index: int32[1, 1]
- k_cache_cross: float32[4, 6, 64, 1500]
- v_cache_cross: float32[4, 6, 1500, 64]
- k_cache_self: float32[4, 6, 64, 224]
- v_cache_self: float32[4, 6, 224, 64]
Completion Time
8/11/2024, 6:40:44 AM
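The input specs above can be stubbed out with NumPy for a quick shape sanity check before feeding a real interpreter. This is only a sketch: the names, shapes, and dtypes are copied from the Input Specs section, while the zero-filled contents are placeholders.

```python
import numpy as np

# Dummy inputs matching the profiled WhisperDecoder's input specs.
# Names, shapes, and dtypes come from the Input Specs section above;
# the zero values are placeholders, not meaningful data.
inputs = {
    "x":             np.zeros((1, 1), dtype=np.int32),            # current token id
    "index":         np.zeros((1, 1), dtype=np.int32),            # decode position
    "k_cache_cross": np.zeros((4, 6, 64, 1500), dtype=np.float32),
    "v_cache_cross": np.zeros((4, 6, 1500, 64), dtype=np.float32),
    "k_cache_self":  np.zeros((4, 6, 64, 224),  dtype=np.float32),
    "v_cache_self":  np.zeros((4, 6, 224, 64),  dtype=np.float32),
}

for name, arr in inputs.items():
    print(name, arr.shape, arr.dtype)
```

Note the cross-attention caches span the encoder's 1500 audio positions, while the self-attention caches cover up to 224 decoded tokens.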
Versions
- TensorFlow Lite: 2.16.1
- QNN TfLite Delegate: v2.24.0.240626131148_96320
- Android: 13 (TP1A.220624.014)
- AI Hub: aihub-2024.08.01.0
Estimated Inference Time
16.3 ms
Estimated Peak Memory Usage
16-85 MB
Compute Units
- NPU: 555
- CPU: 1
- GPU: 1
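As a quick check on the compute-unit breakdown above, almost the entire model is dispatched to the NPU; the share works out as follows (counts taken from the report):

```python
# Per-unit op counts, from the Compute Units section above.
units = {"NPU": 555, "CPU": 1, "GPU": 1}

total = sum(units.values())           # 557 ops in total
npu_share = units["NPU"] / total      # fraction dispatched to the NPU
print(f"NPU share: {npu_share:.1%}")  # ~99.6%
```

The two residual CPU/GPU ops are typically layers the NPU backend cannot lower, and they matter little at this ratio.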
| Stage | Time | Memory |
|---|---|---|
| First App Load | 2.52 s | 286-296 MB |
| Subsequent App Load | 639 ms | 52-123 MB |
| Inference | 16.3 ms | 16-85 MB |
| TensorFlow Lite | Value |
|---|---|
| number_of_threads | 4 |
| QNN Delegate | Value |
|---|---|
| backend_type | kHtpBackend |
| log_level | kLogLevelWarn |
| htp_options.performance_mode | kHtpBurst |
| htp_options.precision | kHtpFp16 |
| htp_options.useConvHmx | true |
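A minimal sketch of how the profiled QNN delegate settings could be attached to a TFLite interpreter on-device. Only the option values come from the report; the exact option spellings accepted by the delegate library and the library path are assumptions, and the TensorFlow import is deferred so the snippet loads without an on-device runtime.

```python
# QNN delegate options reported in the profile above, as a plain dict.
# Key/value spellings mirror the report; the delegate library may expect
# slightly different option names (assumption).
qnn_options = {
    "backend_type": "kHtpBackend",
    "log_level": "kLogLevelWarn",
    "htp_options.performance_mode": "kHtpBurst",
    "htp_options.precision": "kHtpFp16",
    "htp_options.useConvHmx": "true",
}

def make_interpreter(model_path, delegate_lib):
    """Build a TFLite interpreter with the QNN delegate attached.

    `delegate_lib` is the device-specific path to the QNN TFLite delegate
    shared library (illustrative; not part of the report).
    """
    import tensorflow as tf  # deferred: only needed when actually running
    delegate = tf.lite.experimental.load_delegate(delegate_lib, qnn_options)
    return tf.lite.Interpreter(model_path=model_path,
                               experimental_delegates=[delegate])
```

`kHtpFp16` matches the float16 precision the HTP (NPU) ran at during this profile; `kHtpBurst` trades power for the lowest latency.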
| NNAPI Delegate | Value |
|---|---|
| accelerator_name | "qti-dsp" |
| execution_preference | kSustainedSpeed |
| allow_fp16 | true |
| GPUv2 Delegate | Value |
|---|---|
| inference_preference | TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED |
| inference_priority1 | TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY |
| inference_priority2 | TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE |
| inference_priority3 | TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION |
| XNNPACK Delegate | Value |
|---|---|