IBM-Granite-3B-Code-Instruct
State‑of‑the‑art large language model useful for a variety of code understanding and generation tasks.
Granite‑3B‑Code‑Instruct‑2K is a 3B parameter model fine-tuned from Granite‑3B‑Code‑Base‑2K on a combination of permissively licensed instruction data to enhance its instruction-following capabilities, including logical reasoning and problem-solving skills.
Technical Details
Input sequence length for Prompt Processor: 128
Context length: 2048 tokens
Number of parameters: 3.48B
Precision: fp16
Number of key-value heads: 32
Information about the model parts: The Prompt Processor and Token Generator are each split into 4 parts; corresponding Prompt Processor and Token Generator parts share weights.
Prompt processor model size: 7 GB
Prompt processor input (part 1): 128 tokens
Prompt processor output (part 1): Embeddings output
Prompt processor input (other parts): 128 tokens + KVCache initialized with pad token
Prompt processor output (other parts): 128 output tokens + KVCache for token generator
Token generator model size: 7 GB
Token generator input (part 1): 1 token
Token generator output (part 1): Embeddings output
Token generator input (other parts): 1 input token + past KVCache
Token generator output (other parts): 1 output token + KVCache for next iteration
Use: Initiate the conversation with the Prompt Processor, then run the Token Generator for each subsequent iteration (see the sketch after this list).
Supported natural languages: English
Supported programming languages: The Granite code foundation models support 116 programming languages, including Python, JavaScript, Java, C++, Go, and Rust.
Minimum QNN SDK version required: 2.27.7
TTFT: Time To First Token is the time it takes to generate the first response token. It is expressed as a range because it varies with prompt length: the lower bound is for a short prompt (up to 128 tokens, i.e., one Prompt Processor iteration) and the upper bound is for a prompt using the full context length (2048 tokens, i.e., 2048 / 128 = 16 Prompt Processor iterations).
Response Rate: Rate of response generation after the first response token.
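The details above determine the on-device inference loop: the prompt is consumed 128 tokens at a time by the Prompt Processor to build up the KV cache, and the Token Generator then emits one token per call, threading the cache through each step. The sketch below illustrates that flow in Python; the `run_prompt_processor` and `run_token_generator` callables, the pad-token id, and the EOS handling are illustrative assumptions, not part of any shipped API.

```python
# Minimal sketch of the two-stage inference flow described above. The
# run_prompt_processor / run_token_generator callables are hypothetical
# placeholders for the deployed model parts, not a real QNN or AI Hub API.

PROMPT_CHUNK = 128   # the Prompt Processor consumes 128 tokens per iteration
CONTEXT_LEN = 2048   # maximum supported context length
PAD_TOKEN = 0        # assumed pad-token id used to initialize/pad the KV cache


def generate(prompt_tokens, max_new_tokens, eos_token,
             run_prompt_processor, run_token_generator):
    """Process the prompt in 128-token chunks, then generate one token per step.

    Assumes a non-empty prompt no longer than CONTEXT_LEN tokens.
    """
    kv_cache = None   # first call starts from a pad-initialized KV cache
    chunk_out = []

    # Stage 1: prompt processing, 128 tokens per call, accumulating the KV cache.
    for start in range(0, len(prompt_tokens), PROMPT_CHUNK):
        chunk = prompt_tokens[start:start + PROMPT_CHUNK]
        chunk = chunk + [PAD_TOKEN] * (PROMPT_CHUNK - len(chunk))  # right-pad last chunk
        chunk_out, kv_cache = run_prompt_processor(chunk, kv_cache)

    # The output at the position of the final prompt token seeds generation.
    next_token = chunk_out[(len(prompt_tokens) - 1) % PROMPT_CHUNK]

    # Stage 2: token generation, one token per call, reusing the past KV cache.
    output = []
    while (len(output) < max_new_tokens
           and len(prompt_tokens) + len(output) < CONTEXT_LEN):
        if next_token == eos_token:
            break
        output.append(next_token)
        next_token, kv_cache = run_token_generator(next_token, kv_cache)
    return output
```

The 4-way part split and the weight sharing between corresponding Prompt Processor and Token Generator parts happen below this level of abstraction, so they do not change the calling pattern shown here.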
Applicable Scenarios
- Coding
- Coding assist
Supported Form Factors
- Phone
- Tablet
Licenses
Source Model: APACHE-2.0
Deployable Model: APACHE-2.0
Terms of Use: Qualcomm® Generative AI usage and limitations
Tags
- llm
- generative-ai
Supported Devices
- Samsung Galaxy S24
- Samsung Galaxy S24 Ultra
- Samsung Galaxy S24+
- Snapdragon 8 Elite QRD
Supported Chipsets
- Snapdragon® 8 Elite Mobile
- Snapdragon® 8 Gen 3 Mobile