IBM-Granite-v3.1-8B-Instruct
State‑of‑the‑art large language model useful for a variety of code understanding and generation tasks.
Granite‑3.1‑8B‑Instruct is an 8B-parameter, long-context instruct model finetuned from Granite‑3.1‑8B‑Base using a combination of permissively licensed open-source instruction datasets and internally collected synthetic datasets tailored to long-context problems. The model was developed with a diverse set of techniques under a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.
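For reference, here is a minimal sketch of driving the instruct model through its structured chat format using the Hugging Face transformers chat-template API; the repository id and generation settings are assumptions, not part of this card:

```python
# Hedged sketch: prompting the instruct model through its chat template.
# MODEL_ID is an assumed Hugging Face repo id; substitute your checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ibm-granite/granite-3.1-8b-instruct"  # assumption

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

messages = [
    {"role": "user",
     "content": "Write a Python function that checks if a string is a palindrome."}
]

# apply_chat_template wraps the messages in the role markers the model was
# instruction-tuned with, so the prompt matches its structured chat format.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True))
```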
Technical Details
Input sequence length for Prompt Processor: 128
Context length: 4096
Number of parameters: 8B
Precision: w4a16 + w8a16 (a few layers)
Number of key-value heads: 8
Information about the model parts: Prompt Processor and Token Generator are split into 5 parts each. Corresponding Prompt Processor and Token Generator parts share weights.
Prompt processor model size: 4.8 GB
Prompt processor input (part 1): 128 tokens
Prompt processor output (part 1): Embeddings output
Prompt processor input (other parts): 128 tokens + KVCache initialized with pad token
Prompt processor output (other parts): 128 output tokens + KVCache for token generator
Token generator model size: 4.8 GB
Token generator input (part 1): 1 token
Token generator output (part 1): Embeddings output
Token generator input (other parts): 1 input token + past KVCache
Token generator output (other parts): 1 output token + KVCache for next iteration
Use: Initiate the conversation with the prompt processor, then run the token generator for each subsequent iteration (see the sketch after this list).
Supported natural languages: English
Supported programming languages: The Granite code foundation models support 116 programming languages, including Python, JavaScript, Java, C++, Go, and Rust.
Minimum QNN SDK version required: 2.3
TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies with the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt that uses the full context length (4096 tokens).
Response Rate: Rate of response generation after the first response token, in tokens per second.
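The split described above implies a simple two-stage loop: the prompt processor consumes the prompt in 128-token chunks while building the KV cache, after which the token generator produces one token per iteration against that cache. Below is a hedged, runtime-agnostic sketch of that flow; prompt_processor, token_generator, PAD_ID, and the padding policy are hypothetical stand-ins, not the actual QNN runtime API:

```python
# Hedged sketch of the two-stage flow: prompt processor first, token
# generator for every subsequent step. The callables, PAD_ID, and the
# padding policy are hypothetical; the real QNN runtime API differs.
from typing import Any, Callable, List, Tuple

CHUNK = 128   # prompt-processor input sequence length (from the card)
PAD_ID = 0    # hypothetical pad token id used to initialize the KVCache

KVCache = Any  # opaque cache handle passed between model parts

def generate(
    prompt_ids: List[int],
    prompt_processor: Callable[[List[int], KVCache], Tuple[List[int], KVCache]],
    token_generator: Callable[[int, KVCache], Tuple[int, KVCache]],
    eos_id: int,
    max_new_tokens: int = 128,
) -> List[int]:
    # Stage 1: feed the prompt in 128-token chunks; each call returns
    # 128 output tokens plus the KVCache for the next call.
    kv: KVCache = None  # the runtime starts from a pad-initialized cache
    last = PAD_ID
    for start in range(0, len(prompt_ids), CHUNK):
        chunk = prompt_ids[start:start + CHUNK]
        pad = CHUNK - len(chunk)
        out, kv = prompt_processor(chunk + [PAD_ID] * pad, kv)
        last = out[len(chunk) - 1]  # prediction at the last real position

    # Stage 2: one token in, one token out, carrying the KVCache forward.
    generated = [last]
    while len(generated) < max_new_tokens and generated[-1] != eos_id:
        nxt, kv = token_generator(generated[-1], kv)
        generated.append(nxt)
    return generated
```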
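Given the TTFT and response-rate definitions above, both metrics can be measured around any streaming decode loop. A minimal sketch, assuming a hypothetical stream_tokens iterator that yields one token per token-generator iteration:

```python
# Hedged sketch: timing TTFT and response rate around a token stream.
# stream_tokens is a hypothetical iterator yielding one token per step.
import time
from typing import Iterable, Tuple

def measure(stream_tokens: Iterable[str]) -> Tuple[float, float]:
    t0 = time.perf_counter()
    ttft = None
    count = 0
    for _ in stream_tokens:
        if ttft is None:
            ttft = time.perf_counter() - t0  # Time To First Token (s)
        count += 1
    elapsed = time.perf_counter() - t0
    # Response rate: tokens produced after the first one, per second.
    rate = (count - 1) / (elapsed - ttft) if count > 1 and elapsed > ttft else 0.0
    return ttft, rate
```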
Applicable Scenarios
- Coding
- Coding assist
Supported Form Factors
- Phone
- Tablet
Licenses
Source Model: Apache-2.0
Deployable Model: Apache-2.0
Terms of Use: Qualcomm® Generative AI usage and limitations
Tags
- llm
- generative-ai
- quantized
Supported Devices
- Snapdragon 8 Elite QRD
- Snapdragon X Elite CRD
Supported Chipsets
- Snapdragon® 8 Elite Mobile
- Snapdragon® X Elite