Qualcomm® AI Hub

Whisper-Tiny-En

Automatic speech recognition (ASR) model for English transcription as well as translation.

OpenAI’s Whisper ASR (Automatic Speech Recognition) model is a state-of-the-art system for transcribing spoken language into written text. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. In particular, it excels at long-form transcription and can accurately transcribe audio clips of up to 30 seconds. The time to the first token is the encoder's latency, while the time for each additional token is the decoder's latency, assuming the mean decoded sequence length listed under Technical Details below.
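
As a point of reference for the source model's behavior, the following is a minimal sketch that transcribes a short English clip with the upstream open-source tiny.en checkpoint via the openai-whisper Python package; the file name is a placeholder, and this is not the deployable on-device asset exported through AI Hub.

    import whisper  # pip install openai-whisper

    # Load the same tiny.en checkpoint referenced in Technical Details below.
    model = whisper.load_model("tiny.en")

    # Transcribe a short English clip (placeholder file name).
    result = model.transcribe("example_clip.wav")
    print(result["text"])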

Technical Details

Model checkpoint: tiny.en
Input resolution: 80x3000 (30 seconds of audio)
Mean decoded sequence length: 112 tokens
Number of parameters (WhisperEncoder): 9.39M
Model size (WhisperEncoder): 35.9 MB
Number of parameters (WhisperDecoder): 28.2M
Model size (WhisperDecoder): 108 MB
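
Given the latency model described above (time to first token from the encoder, time per additional token from the decoder), a rough end-to-end estimate for one 30-second chunk can be sketched as follows; the per-inference latencies are assumed placeholder values, not measured numbers for any listed device.

    # Assumed placeholder latencies; substitute profiled numbers for a target device.
    encoder_latency_ms = 120.0   # time to first token (encoder, one 30 s chunk)
    decoder_latency_ms = 5.0     # time per additional token (decoder)
    mean_decoded_tokens = 112    # mean decoded sequence length from the table above

    total_ms = encoder_latency_ms + mean_decoded_tokens * decoder_latency_ms
    print(f"Estimated end-to-end latency: {total_ms:.0f} ms")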

Applicable Scenarios

  • Smart Home
  • Accessibility

Licenses

Source Model: MIT
Deployable Model: AI Model Hub License

Tags

  • foundation

Supported Automotive Devices

  • SA7255P ADP
  • SA8255 (Proxy)
  • SA8295P ADP
  • SA8650 (Proxy)
  • SA8775P ADP

Supported Automotive Chipsets

  • Qualcomm® SA7255P
  • Qualcomm® SA8255P (Proxy)
  • Qualcomm® SA8295P
  • Qualcomm® SA8650P (Proxy)
  • Qualcomm® SA8775P
