
Whisper-Tiny-En

Automatic speech recognition (ASR) model for English transcription and translation.

OpenAI’s Whisper ASR (Automatic Speech Recognition) model is a state-of-the-art system designed for transcribing spoken language into written text. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. It excels at long-form transcription, accurately transcribing audio clips of up to 30 seconds. The time to the first token is the encoder's latency; the time per additional token is the decoder's latency, assuming the mean decoded sequence length specified below.
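The latency decomposition above can be sketched as a simple model: total transcription time is one encoder pass plus one decoder pass per generated token. This is a minimal illustration; the per-component latencies below are hypothetical placeholders, not measured values for this model.

```python
def estimated_latency_ms(encoder_ms: float,
                         decoder_ms_per_token: float,
                         num_tokens: int) -> float:
    """Time to the first token is the encoder latency; each
    subsequent token adds one decoder pass."""
    return encoder_ms + decoder_ms_per_token * num_tokens


# 112 tokens is the mean decoded sequence length reported below;
# the encoder/decoder latencies here are illustrative assumptions.
total = estimated_latency_ms(encoder_ms=100.0,
                             decoder_ms_per_token=5.0,
                             num_tokens=112)
print(total)  # 100 + 5 * 112 = 660.0
```

With these placeholder numbers, decoding dominates end-to-end latency, which is why the mean decoded sequence length matters when profiling.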

Snapdragon® X Elite (TorchScript → Qualcomm® AI Engine Direct)

  • Inference time: 238 ms
  • Memory usage: 1 MB
  • Layers: 337 (NPU)

Technical Details

Model checkpoint: tiny.en
Input resolution: 80x3000 (30 seconds of audio)
Mean decoded sequence length: 112 tokens
Number of parameters (WhisperEncoder): 9.39M
Model size (WhisperEncoder): 35.9 MB
Number of parameters (WhisperDecoder): 28.2M
Model size (WhisperDecoder): 108 MB
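The 80x3000 input resolution corresponds to an 80-channel log-mel spectrogram over a 30-second audio chunk. A minimal sketch of the shape arithmetic, assuming Whisper's standard preprocessing constants (16 kHz sample rate, 160-sample hop), which are not stated on this page:

```python
SAMPLE_RATE = 16_000   # Whisper resamples audio to 16 kHz (assumption)
N_MELS = 80            # mel filterbank channels
HOP_LENGTH = 160       # 10 ms hop -> 100 spectrogram frames per second
CHUNK_SECONDS = 30     # fixed chunk length for this model

n_samples = SAMPLE_RATE * CHUNK_SECONDS   # 480,000 audio samples
n_frames = n_samples // HOP_LENGTH        # 3,000 spectrogram frames

print((N_MELS, n_frames))  # (80, 3000) -- matches the input resolution above
```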

Applicable Scenarios

  • Smart Home
  • Accessibility

Licenses

Source model: MIT
Deployable model: AI Model Hub License

Tags

  • foundation
A “foundation” model is versatile by design, handling multiple tasks without the need for fine-tuning.

Supported Compute Chipsets

  • Snapdragon® X Elite