Qwen3-4B-Instruct-2507

State‑of‑the‑art large language model with instruct‑only training, useful for a variety of language understanding and generation tasks.

Qwen3‑4B‑Instruct‑2507 is a state‑of‑the‑art multilingual instruct language model with 4 billion parameters, excelling in language understanding, generation, coding, and mathematics. Unlike the base Qwen3‑4B, this variant is instruct‑tuned only and does not support thinking mode.

Technical Details

  • Input sequence length for Prompt Processor: 128
  • Context lengths: 512, 1024, 2048, 3072, 4096
  • Use: Initiate the conversation with the prompt processor, then run the token generator for each subsequent iteration.
  • Minimum QNN SDK version required: 2.42.0
  • Supported languages: 100+ languages and dialects.
  • TTFT: Time To First Token is the time it takes to generate the first response token. It is expressed as a range because it varies with prompt length: the lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt that fills the full context length (4096 tokens).
  • Response Rate: Rate of response generation after the first response token, measured on a short prompt with a long response; it may slow down at longer context lengths.

Applicable Scenarios

  • Dialogue
  • Content Generation
  • Customer Support

Supported Mobile Form Factors

  • Phone
  • Tablet

Tags

  • llm
  • generative-ai

Supported Mobile Devices

  • Samsung Galaxy S25
  • Snapdragon 8 Elite Gen 5 QRD

Supported Mobile Chipsets

  • Snapdragon® 8 Elite For Galaxy Mobile
  • Snapdragon® 8 Elite Gen 5 Mobile
