Whisper-Large-V3-Turbo
A Transformer‑based automatic speech recognition (ASR) model for multilingual transcription and translation, available on HuggingFace.
Whisper large‑v3‑turbo is a fine‑tuned version of a pruned Whisper large‑v3. In other words, it is the same model, except that the number of decoder layers has been reduced from 32 to 4. As a result, the model is significantly faster, at the expense of a minor degradation in quality. This model is based on the Transformer architecture and has been optimized for edge inference by replacing Multi‑Head Attention (MHA) with Single‑Head Attention (SHA) and linear layers with convolutional (conv) layers. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real‑world applications. Specifically, it excels in long‑form transcription, accurately transcribing audio clips up to 30 seconds long. Time to the first token is the encoder's latency, while time to each additional token is the decoder's latency, assuming a maximum decoded length as specified below.
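The latency model described above can be sketched in a few lines: the first token costs one encoder pass, and every decoded token thereafter costs one decoder pass. The function name and the timing numbers below are illustrative assumptions, not measured values for this model.

```python
# Minimal sketch of the autoregressive latency model for an
# encoder-decoder ASR model: time to first token = encoder pass,
# each additional token = one decoder pass.
# All numbers here are made up for illustration, not benchmarks.

def total_latency_ms(encoder_ms: float,
                     decoder_ms_per_token: float,
                     num_tokens: int) -> float:
    """End-to-end latency: one encoder pass plus one decoder
    pass per decoded token, up to the max decoded length."""
    return encoder_ms + decoder_ms_per_token * num_tokens

# Example with hypothetical figures: a 200 ms encoder pass,
# 10 ms per decoded token, and a 200-token max decoded length.
print(total_latency_ms(200.0, 10.0, 200))  # 2200.0 ms
```

This is why pruning the decoder from 32 to 4 layers matters so much for responsiveness: decoder cost is paid once per token, so reducing per-token latency shrinks the dominant term for long outputs.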
Technical Details
Applicable Scenarios
- Smart Home
- Accessibility
Licenses
Tags
- foundation
Supported Compute Devices
- Snapdragon X Elite CRD
- Snapdragon X Plus 8-Core CRD
Supported Compute Chipsets
- Snapdragon® X Elite
- Snapdragon® X Plus 8-Core