Qwen2-7B-Instruct

State-of-the-art large language model for a wide variety of language understanding and generation tasks.

Qwen2-7B-Instruct is a state-of-the-art multilingual language model with 7.07 billion parameters, excelling in language understanding, generation, coding, and mathematics.

Technical Details

Input sequence length for Prompt Processor: 128
Context length: 4096
Number of parameters: 7.07B
Precision: w4a16 + w8a16 (a few layers)
Information about the model parts: the Prompt Processor and Token Generator are each split into 5 parts; each corresponding Prompt Processor and Token Generator part shares weights.
Supported languages: English, Chinese, German, French, Spanish, Portuguese, Italian, Dutch, Russian, Czech, Polish, Arabic, Persian, Hebrew, Turkish, Japanese, Korean, Vietnamese, Thai, Indonesian, Malay, Lao, Burmese, Cebuano, Khmer, Tagalog, Hindi, Bengali, Urdu.
TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies with prompt length. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor); the upper bound is for a prompt using the full context length (4096 tokens).
Response Rate: rate of response generation after the first response token.
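The TTFT range above follows directly from the 128-token prompt-processor input length: a prompt is prefilled in ceil(prompt_tokens / 128) passes, so a short prompt takes one pass and a full 4096-token context takes 32. A minimal sketch of that relationship (the function name and validation are illustrative, not part of the model's runtime):

```python
import math

def prefill_iterations(prompt_tokens: int, chunk_size: int = 128) -> int:
    """Number of prompt-processor passes needed to prefill a prompt.

    TTFT scales with this count: 1 pass at the lower bound of the
    reported range, 32 passes at the full 4096-token context length.
    """
    if not 0 < prompt_tokens <= 4096:
        raise ValueError("prompt must fit within the 4096-token context")
    return math.ceil(prompt_tokens / chunk_size)
```

For example, a 129-token prompt already requires two passes, which is why TTFT grows stepwise rather than smoothly with prompt length.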

Applicable Scenarios

  • Dialogue
  • Content Generation
  • Customer Support

Supported Form Factors

  • Phone
  • Tablet

License

Tags

  • llm
  • generative-ai

Supported Devices

  • Snapdragon 8 Elite QRD

Supported Chipsets

  • Snapdragon® 8 Elite Mobile
