Profile Job Results
Job ID: jg9wyzwmp
Status: Results Ready
Name: sam2_w8a8_SAM2Decoder

Target Device:
- QCS6490 (Proxy)
- Android 12
- Qualcomm® QCS6490

Creator: ai-hub-support@qti.qualcomm.com

Target Model
Input Specs:
- image_embeddings: uint8[1, 64, 64, 256]
- high_res_features1: uint8[1, 256, 256, 32]
- high_res_features2: uint8[1, 128, 128, 64]
- sparse_embedding: uint8[1, 3, 256]

Completion Time: 10/26/2025, 3:09:31 AM
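
All four decoder inputs are 8-bit quantized tensors. As a rough, non-authoritative sketch (the .tflite file name is an assumption based on the job name, and TFLite tensor names may carry signature prefixes), dummy inputs matching the specs above could be fed to the model locally like this:

```python
import numpy as np
import tensorflow as tf

# Dummy tensors matching the Input Specs listed above (all uint8).
inputs = {
    "image_embeddings":   np.zeros((1, 64, 64, 256), dtype=np.uint8),
    "high_res_features1": np.zeros((1, 256, 256, 32), dtype=np.uint8),
    "high_res_features2": np.zeros((1, 128, 128, 64), dtype=np.uint8),
    "sparse_embedding":   np.zeros((1, 3, 256), dtype=np.uint8),
}

# Model file name is an assumption based on the job name.
interpreter = tf.lite.Interpreter(model_path="sam2_w8a8_SAM2Decoder.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    # TFLite tensor names may include signature prefixes, so match by substring.
    value = next((v for k, v in inputs.items() if k in detail["name"]), None)
    if value is not None:
        interpreter.set_tensor(detail["index"], value)

interpreter.invoke()
outputs = [interpreter.get_tensor(d["index"]) for d in interpreter.get_output_details()]
```
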
Options: --qairt_version latest

Versions
- TensorFlow Lite: 2.17.0
- QAIRT: v2.39.0.250925215840_163802
- QNN TfLite Delegate: v2.39.0.250925215840_163802
- Android: 12 (SP1A.210812.016)
- AI Hub: aihub-2025.10.21.0
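
For reference, a job with these options can be submitted through the qai_hub Python client roughly as sketched below; the model file name is an assumption, while the device name and the --qairt_version option mirror the values shown above.

```python
import qai_hub as hub

# Device and options mirror this job; the model path is an assumption.
profile_job = hub.submit_profile_job(
    model="sam2_w8a8_SAM2Decoder.tflite",
    device=hub.Device("QCS6490 (Proxy)"),
    options="--qairt_version latest",
)

profile_job.wait()                        # block until profiling finishes
profile = profile_job.download_profile()  # per-layer timing and memory details
```
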
Estimated Inference Time: 36.0 ms

Estimated Peak Memory Usage: 13 - 77 MB

Compute Units (layers assigned per unit):
- NPU: 777
- CPU: 70
- GPU: 20

Of the 867 layers profiled, 777 (roughly 90%) run on the NPU, with the remainder split between the CPU (70) and GPU (20).

| Stage | Time | Memory | 
|---|---|---|
| First App Load | 12.1 s | 200‑210 MB | 
| Subsequent App Load | 12.3 s | 180‑344 MB | 
| Inference | 36.0 ms | 13‑77 MB | 

| TensorFlow Lite | Value |
|---|---|
| number_of_threads | 4 | 

| QNN Delegate | Value |
|---|---|
| backend_type | kHtpBackend | 
| log_level | kLogLevelWarn | 
| htp_options.performance_mode | kHtpBurst | 
| htp_options.precision | kHtpFp16 | 
| htp_options.useConvHmx | true | 

| GPUv2 Delegate | Value |
|---|---|
| inference_preference | TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED | 
| inference_priority1 | TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY | 
| inference_priority2 | TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE | 
| inference_priority3 | TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION | 

| XNNPACK Delegate | Value |
|---|---|
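
For local experimentation, the runtime configuration above can be approximated with the TFLite Python API: 4 CPU threads and the QNN delegate on the HTP backend (the kHtpBackend, kHtpFp16, and kHtpBurst values listed above are set through the delegate's own options). This is a sketch only; the delegate library name and model file name are assumptions.

```python
import tensorflow as tf

# The library name from the QAIRT SDK is an assumption; the HTP backend, FP16
# precision, and burst performance mode listed above are configured through the
# QNN delegate's own options.
qnn_delegate = tf.lite.experimental.load_delegate("libQnnTFLiteDelegate.so")

interpreter = tf.lite.Interpreter(
    model_path="sam2_w8a8_SAM2Decoder.tflite",  # assumed file name
    experimental_delegates=[qnn_delegate],
    num_threads=4,                              # matches number_of_threads above
)
interpreter.allocate_tensors()
```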