
    Whisper-Small-En

    Automatic speech recognition (ASR) model for English transcription as well as translation.

    OpenAI’s Whisper ASR (Automatic Speech Recognition) model is a state-of-the-art system for transcribing spoken language into written text. It remains robust in realistic, noisy environments, making it reliable for real-world applications. It processes audio in clips of up to 30 seconds and excels at long-form transcription. Time to the first token is the encoder’s latency, while time to each additional token is the decoder’s latency, assuming the mean decoded sequence length specified below.
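    The latency decomposition above can be sketched as a simple estimate: one encoder pass to the first token, then one decoder pass per additional token. The per-component times below are hypothetical placeholders for illustration, not measured numbers for this model.

    ```python
    def total_latency_ms(encoder_ms: float, decoder_ms_per_token: float,
                         n_tokens: int) -> float:
        """Time to first token is the encoder pass; each decoded token
        then adds one decoder pass."""
        return encoder_ms + decoder_ms_per_token * n_tokens

    # Placeholder component latencies, with the mean decoded length of 112 tokens:
    estimate = total_latency_ms(encoder_ms=150.0, decoder_ms_per_token=4.0,
                                n_tokens=112)
    print(f"{estimate:.0f} ms")  # 150 + 4*112 = 598 ms
    ```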

    Qualcomm® QCS8550
    QCS8550 (Proxy)
    Runtime: TorchScript → TFLite
    Inference Time: 602 ms
    Memory Usage: 70–499 MB
    Layers: 585 (GPU)

    Technical Details

    Model checkpoint: small.en
    Input resolution: 80x3000 (30 seconds of audio)
    Mean decoded sequence length: 112 tokens
    Number of parameters (WhisperEncoder): 102M
    Model size (WhisperEncoder): 390 MB
    Number of parameters (WhisperDecoder): 139M
    Model size (WhisperDecoder): 531 MB
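    The 80x3000 input resolution follows from Whisper’s fixed audio front-end: audio is resampled to 16 kHz and converted to an 80-bin log-Mel spectrogram with a 160-sample hop, so a 30-second clip yields 3000 frames. A minimal sketch of that arithmetic:

    ```python
    # Whisper's published audio front-end parameters.
    SAMPLE_RATE = 16_000   # audio is resampled to 16 kHz
    HOP_LENGTH = 160       # one spectrogram frame every 10 ms
    N_MELS = 80            # Mel frequency bins
    CHUNK_SECONDS = 30     # fixed chunk length

    n_samples = SAMPLE_RATE * CHUNK_SECONDS   # 480,000 samples
    n_frames = n_samples // HOP_LENGTH        # 3,000 frames
    print((N_MELS, n_frames))  # (80, 3000) — the 80x3000 input resolution
    ```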

    Applicable Scenarios

    • Smart Home
    • Accessibility

    Licenses

    Source Model: MIT
    Deployable Model: AI Model Hub License

    Tags

    • foundation
      A “foundation” model is versatile and designed for multi-task capabilities, without the need for fine-tuning.

    Supported IoT Devices

    • QCS8550 (Proxy)

    Supported IoT Chipsets

    • Qualcomm® QCS8550