Qwen3-4B

State‑of‑the‑art large language model for a variety of language understanding and generation tasks.

Qwen3‑4B is a state‑of‑the‑art multilingual base language model with 4 billion parameters, excelling at language understanding, generation, coding, and mathematics.

Technical Details

  • Input sequence length for prompt processor: 128
  • Context lengths: 512, 1024, 2048, 3072, 4096
  • Use: initiate the conversation with the prompt processor, then run the token generator for each subsequent iteration.
  • Minimum QNN SDK version required: 2.42.0
  • Supported languages: 100+ languages and dialects
  • TTFT: Time To First Token is the time it takes to generate the first response token. It is expressed as a range because it varies with prompt length: the lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt that fills the full context length (4096 tokens).
  • Response Rate: rate of response generation after the first response token. Measured on a short prompt with a long response (with thinking); may slow down at longer context lengths.
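The two-stage flow above (prompt processor first, token generator for subsequent iterations) can be sketched as follows. This is a minimal illustration of the control flow only; the function names and the stand-in "state" are hypothetical, not the actual QNN SDK API, and real code would update the model's KV cache and sample real token IDs.

```python
PROMPT_CHUNK = 128     # input sequence length of the prompt processor
CONTEXT_LIMIT = 4096   # largest supported context length

def process_prompt(prompt_tokens):
    """Hypothetical prompt-processor stage: consume the prompt in
    128-token chunks, returning a stand-in cache state (the tokens seen)."""
    state = []
    for i in range(0, len(prompt_tokens), PROMPT_CHUNK):
        chunk = prompt_tokens[i:i + PROMPT_CHUNK]
        state.extend(chunk)  # real code would update the model's KV cache here
    return state

def generate(state, max_new_tokens):
    """Hypothetical token-generator stage: emit one token per iteration
    until max_new_tokens is reached or the context limit is hit."""
    out = []
    while len(out) < max_new_tokens and len(state) + len(out) < CONTEXT_LIMIT:
        out.append(0)  # placeholder for a sampled token id
    return out

prompt = list(range(300))                     # a 300-token prompt -> 3 chunks
state = process_prompt(prompt)                # one TTFT-relevant pass
response = generate(state, max_new_tokens=16)
```

A 300-token prompt needs three prompt-processor iterations (128 + 128 + 44 tokens), which is why TTFT grows with prompt length while the per-token response rate is governed by the generator loop.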

Applicable Scenarios

  • Dialogue
  • Content Generation
  • Customer Support

Tags

  • llm
  • generative-ai

Supported IoT Devices

  • Dragonwing IQ-9075 EVK

Supported IoT Chipsets

  • Qualcomm® QCS9075
