Whisper-Base-En
Automatic speech recognition (ASR) model for English transcription as well as translation.
OpenAI’s Whisper ASR model is a state-of-the-art system for transcribing spoken language into written text. It exhibits robust performance in realistic, noisy environments, making it reliable for real-world applications. It excels at long-form transcription, processing audio in clips of up to 30 seconds. Time to the first token is the encoder's latency, while time to each additional token is the decoder's latency, assuming the mean decoded sequence length specified below.
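The latency model described above can be sketched as simple arithmetic: one encoder pass to the first token, then one decoder step per generated token. The figures below are hypothetical placeholders, not measured numbers from this card; only the mean sequence length (112 tokens) comes from the table.

```python
# Hypothetical per-pass latencies in milliseconds -- placeholders, not measured values.
ENCODER_MS = 150.0   # time to first token (encoder runs once per 30 s clip)
DECODER_MS = 4.0     # time per additional token (decoder runs autoregressively)
MEAN_TOKENS = 112    # mean decoded sequence length, from the technical details

def transcription_latency_ms(encoder_ms: float, decoder_ms: float, n_tokens: int) -> float:
    """End-to-end latency: one encoder pass plus one decoder step per token."""
    return encoder_ms + decoder_ms * n_tokens

total = transcription_latency_ms(ENCODER_MS, DECODER_MS, MEAN_TOKENS)
print(f"Estimated latency for {MEAN_TOKENS} tokens: {total:.0f} ms")
```

With these placeholder numbers the estimate is 150 + 4 × 112 = 598 ms; substituting device-measured encoder and decoder latencies gives a per-clip estimate for that device.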
Technical Details
Model checkpoint: base.en
Input resolution: 80x3000 (30 seconds of audio)
Mean decoded sequence length: 112 tokens
Number of parameters (WhisperEncoder): 23.7M
Model size (WhisperEncoder): 90.6 MB
Number of parameters (WhisperDecoder): 48.6M
Model size (WhisperDecoder): 186 MB
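The 80x3000 input resolution follows from Whisper's standard audio front end: a log-mel spectrogram with 80 mel channels and a 10 ms frame hop over a fixed 30-second window. A minimal sketch of that shape derivation, assuming the standard Whisper preprocessing parameters (16 kHz sample rate, hop length of 160 samples):

```python
# Assumed Whisper front-end parameters (standard in the openai/whisper reference code).
SAMPLE_RATE = 16_000   # Hz
CLIP_SECONDS = 30      # fixed-length input window
N_MELS = 80            # mel filterbank channels
HOP_LENGTH = 160       # STFT hop in samples, i.e. one frame every 10 ms

def mel_input_shape() -> tuple[int, int]:
    """Shape of the encoder input for one 30-second clip."""
    n_samples = SAMPLE_RATE * CLIP_SECONDS   # 480,000 samples
    n_frames = n_samples // HOP_LENGTH       # 3,000 frames
    return (N_MELS, n_frames)

print(mel_input_shape())  # (80, 3000) -- matches the input resolution above
```

Shorter clips are zero-padded to the full 30-second window before encoding, so the encoder always sees this fixed shape.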
Applicable Scenarios
- Smart Home
- Accessibility
Supported Mobile Form Factors
- Phone
- Tablet
Licenses
Source Model:MIT
Deployable Model:AI Model Hub License
Tags
- foundation: A “foundation” model is versatile and designed for multi-task capabilities, without the need for fine-tuning.
Supported Mobile Devices
- Google Pixel 3
- Google Pixel 3a
- Google Pixel 3a XL
- Google Pixel 4
- Google Pixel 4a
- Google Pixel 5a 5G
- Samsung Galaxy S21
- Samsung Galaxy S21 Ultra
- Samsung Galaxy S21+
- Samsung Galaxy S22 5G
- Samsung Galaxy S22 Ultra 5G
- Samsung Galaxy S22+ 5G
- Samsung Galaxy S23
- Samsung Galaxy S23 Ultra
- Samsung Galaxy S23+
- Samsung Galaxy S24
- Samsung Galaxy S24 Ultra
- Samsung Galaxy S24+
- Samsung Galaxy Tab S8
- Xiaomi 12
- Xiaomi 12 Pro
Supported Mobile Chipsets
- Snapdragon® 8 Gen 1 Mobile
- Snapdragon® 8 Gen 2 Mobile
- Snapdragon® 8 Gen 3 Mobile
- Snapdragon® 888 Mobile