Falcon3-7B-Instruct
Falcon3-7B-Instruct is a state-of-the-art large language model for a variety of language understanding and generation tasks. The Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
Technical Details
Input sequence length for Prompt Processor: 128 tokens
Context length: 4096 tokens
Precision: w4a16 (4-bit weights, 16-bit activations), with w8a16 for a few layers
Number of key-value heads: 4
Model-1 (Prompt Processor): PromptProcessor
Prompt processor input: 128 tokens + position embeddings + attention mask + KV cache inputs
Prompt processor output: 128 output tokens + KV cache outputs
Model-2 (Token Generator): TokenGenerator
Token generator input: 1 input token + position embeddings + attention mask + KV cache inputs
Token generator output: 1 output token + KV cache outputs
Use: Initiate the conversation with the prompt processor, then run the token generator for each subsequent iteration; a sketch of this two-stage flow follows below.
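The two-stage flow can be sketched roughly as follows. This is a minimal illustration, not the Qualcomm AI Hub or QNN API: prompt_processor and token_generator are hypothetical callables standing in for the two compiled models, and attention-mask and padding handling is omitted.

```python
PROMPT_CHUNK = 128   # prompt processor input sequence length
CONTEXT_LEN = 4096   # maximum context length

def generate(prompt_tokens, prompt_processor, token_generator, max_new_tokens, eos_id):
    """Consume the prompt in 128-token chunks with the prompt processor,
    then decode one token per step with the token generator.
    prompt_processor / token_generator are placeholder callables, not a real API."""
    kv_cache = None
    position = 0

    # Stage 1: prompt processing, 128 tokens at a time, carrying the KV cache forward.
    for start in range(0, len(prompt_tokens), PROMPT_CHUNK):
        chunk = prompt_tokens[start:start + PROMPT_CHUNK]
        logits, kv_cache = prompt_processor(
            tokens=chunk, position=position, kv_cache=kv_cache)
        position += len(chunk)

    # Stage 2: autoregressive generation, one token per iteration.
    next_token = int(logits[-1].argmax())  # greedy pick of the first response token
    output = [next_token]
    while len(output) < max_new_tokens and next_token != eos_id and position < CONTEXT_LEN:
        logits, kv_cache = token_generator(
            token=next_token, position=position, kv_cache=kv_cache)
        position += 1
        next_token = int(logits.argmax())
        output.append(next_token)
    return output
```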
Supported languages: English, French, Spanish, Portuguese.
Minimum QNN SDK version required: 2.28.2
TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies with prompt length: the lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt that uses the full context length (4096 tokens). See the sketch after these definitions.
Response Rate: Rate of response generation after the first response token.
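Because the prompt is consumed in 128-token chunks, the number of prompt-processor passes before the first response token scales with prompt length, which is why TTFT is quoted as a range. A small illustrative calculation (the helper name below is hypothetical):

```python
import math

PROMPT_CHUNK = 128   # prompt processor input sequence length
CONTEXT_LEN = 4096   # maximum context length

def prompt_processor_iterations(prompt_length: int) -> int:
    """Number of prompt-processor passes needed before the first
    response token can be produced (illustrative estimate only)."""
    assert 0 < prompt_length <= CONTEXT_LEN
    return math.ceil(prompt_length / PROMPT_CHUNK)

print(prompt_processor_iterations(100))    # short prompt -> 1 pass   (TTFT lower bound)
print(prompt_processor_iterations(4096))   # full context -> 32 passes (TTFT upper bound)
```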
Applicable Scenarios
- Dialogue
- Content Generation
- Customer Support
Supported Mobile Form Factors
- Phone
- Tablet
Licenses
Source Model: FALCON3
Deployable Model: FALCON3
Terms of Use: Qualcomm® Generative AI usage and limitations
Tags
- llm
- generative-ai
Supported Mobile Devices
- Snapdragon 8 Elite QRD
Supported Mobile Chipsets
- Snapdragon® 8 Elite Mobile