Profile Job Results
Job ID: j7gj3r2vp
Status: Results Ready
Name
whisper_base_en_WhisperDecoder
Target Device
- XR2 Gen 2 (Proxy)
- Android 13
Creator
ai-hub-support@qti.qualcomm.com
Target Model
Input Specs
- x: int32[1, 1]
- index: int32[1, 1]
- k_cache_cross: float32[6, 8, 64, 1500]
- v_cache_cross: float32[6, 8, 1500, 64]
- k_cache_self: float32[6, 8, 64, 224]
- v_cache_self: float32[6, 8, 224, 64]

Completion Time
8/11/2024, 5:48:30 AM
Versions
- TensorFlow Lite: 2.16.1
- QNN TfLite Delegate: v2.24.0.240626131148_96320
- Android: 13 (TP1A.220624.014)
- AI Hub: aihub-2024.08.01.0
Estimated Inference Time
32.9 ms
Estimated Peak Memory Usage
42 - 123 MB
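The 42 MB lower bound of the peak memory range is close to the combined size of the declared input tensors, which are dominated by the cross- and self-attention KV caches. A minimal sketch, computing tensor sizes from the Input Specs above (int32 and float32 are both 4 bytes per element):

```python
# Total byte size of the job's declared input tensors, using the
# shapes and dtypes from the "Input Specs" section above.
from math import prod

INPUT_SPECS = {
    "x": ("int32", (1, 1)),
    "index": ("int32", (1, 1)),
    "k_cache_cross": ("float32", (6, 8, 64, 1500)),
    "v_cache_cross": ("float32", (6, 8, 1500, 64)),
    "k_cache_self": ("float32", (6, 8, 64, 224)),
    "v_cache_self": ("float32", (6, 8, 224, 64)),
}

BYTES_PER_ELEMENT = {"int32": 4, "float32": 4}

def total_input_bytes(specs):
    """Sum element counts times element width over all inputs."""
    return sum(prod(shape) * BYTES_PER_ELEMENT[dtype]
               for dtype, shape in specs.values())

total = total_input_bytes(INPUT_SPECS)
print(f"{total} bytes = {total / 1e6:.1f} MB")  # 42369032 bytes = 42.4 MB
```

The two cross-attention caches alone account for about 36.9 MB, so most of the memory floor is KV-cache storage rather than activations.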
Compute Units
- NPU: 979
- CPU: 3
- GPU: 1
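The compute-unit counts above are per-layer assignments; as a share of the 983 total layers, almost everything runs on the NPU. A small sketch of that arithmetic:

```python
# Share of model layers assigned to each compute unit, from the
# "Compute Units" counts above (979 NPU + 3 CPU + 1 GPU = 983 layers).
layer_counts = {"NPU": 979, "CPU": 3, "GPU": 1}
total_layers = sum(layer_counts.values())

shares = {unit: count / total_layers for unit, count in layer_counts.items()}
for unit, share in shares.items():
    print(f"{unit}: {share:.1%}")
# NPU: 99.6%, CPU: 0.3%, GPU: 0.1%
```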
| Stage | Time | Memory |
|---|---|---|
| First App Load | 4.14 s | 586-596 MB |
| Subsequent App Load | 768 ms | 87-169 MB |
| Inference | 32.9 ms | 42-123 MB |
| TensorFlow Lite | Value |
|---|---|
| number_of_threads | 4 |

| QNN Delegate | Value |
|---|---|
| backend_type | kHtpBackend |
| log_level | kLogLevelWarn |
| htp_options.performance_mode | kHtpBurst |
| htp_options.precision | kHtpFp16 |
| htp_options.useConvHmx | true |

| NNAPI Delegate | Value |
|---|---|
| accelerator_name | "qti-dsp" |
| execution_preference | kSustainedSpeed |
| allow_fp16 | true |

| GPUv2 Delegate | Value |
|---|---|
| inference_preference | TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED |
| inference_priority1 | TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY |
| inference_priority2 | TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE |
| inference_priority3 | TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION |

| XNNPACK Delegate | Value |
|---|---|