Qualcomm® AI Hub

Whisper-Small-V2

Transformer‑based automatic speech recognition (ASR) model for multilingual transcription and translation, available on HuggingFace.

The HuggingFace Whisper‑Small ASR (Automatic Speech Recognition) model is a state‑of‑the‑art system for transcribing spoken language into written text. It is based on the transformer architecture and has been optimized for edge inference by replacing Multi‑Head Attention (MHA) with Single‑Head Attention (SHA) and linear layers with convolutional (conv) layers. The model is robust in realistic, noisy environments, making it reliable for real‑world applications, and it excels at long‑form transcription, accurately handling audio clips up to 30 seconds long. Time to the first token corresponds to the encoder's latency, while time to each additional token corresponds to the decoder's latency, assuming the maximum decoded sequence length specified below.
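As a minimal sketch of the transcription flow, the upstream openai/whisper-small checkpoint can be exercised through the Hugging Face transformers pipeline; this illustrates the general encoder/decoder usage rather than the Qualcomm AI Hub deployable model, and the audio path below is a placeholder.

```python
# Minimal transcription sketch using the upstream openai/whisper-small
# checkpoint via the Hugging Face transformers ASR pipeline.
# Note: this is the source (PyTorch) model, not the optimized on-device export.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
)

# Transcribe a local audio clip (Whisper processes audio in ~30-second windows).
# "sample.wav" is a placeholder path.
result = asr("sample.wav", generate_kwargs={"task": "transcribe"})
print(result["text"])
```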

Technical Details

Model checkpoint: openai/whisper-small
Input resolution: 80x3000 (30 seconds of audio; see the sketch after this list)
Max decoded sequence length: 200 tokens
Number of parameters (HfWhisperEncoder): 102M
Model size (HfWhisperEncoder): 391 MB
Number of parameters (HfWhisperDecoder): 139M
Model size (HfWhisperDecoder): 533 MB
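
The 80x3000 input resolution corresponds to an 80-bin log-mel spectrogram padded to 3000 frames for a 30-second window. A small sketch of how that shape arises, assuming the upstream openai/whisper-small checkpoint and the Hugging Face transformers feature extractor:

```python
# Sketch: the Whisper feature extractor turns 30 seconds of 16 kHz audio into
# an 80 x 3000 log-mel spectrogram, matching the encoder input resolution above.
import numpy as np
from transformers import WhisperFeatureExtractor

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")

# 30 seconds of silence at Whisper's expected 16 kHz sampling rate (placeholder audio).
audio = np.zeros(16_000 * 30, dtype=np.float32)

features = feature_extractor(audio, sampling_rate=16_000, return_tensors="np")
print(features["input_features"].shape)  # (1, 80, 3000)
```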

Applicable Scenarios

  • Smart Home
  • Accessibility

Licenses

Source Model: APACHE-2.0
Deployable Model: AI-HUB-MODELS-LICENSE

Tags

  • foundation

Supported Compute Devices

  • Snapdragon X Elite CRD
  • Snapdragon X Plus 8-Core CRD

Supported Compute Chipsets

  • Snapdragon® X Elite
  • Snapdragon® X Plus 8-Core
